source: palm/trunk/SOURCE/init_pegrid.f90 @ 1217

Last change on this file since 1217 was 1213, checked in by raasch, 11 years ago

last commit documented

[1]1 SUBROUTINE init_pegrid
[1036]2
3!--------------------------------------------------------------------------------!
4! This file is part of PALM.
5!
6! PALM is free software: you can redistribute it and/or modify it under the terms
7! of the GNU General Public License as published by the Free Software Foundation,
8! either version 3 of the License, or (at your option) any later version.
9!
10! PALM is distributed in the hope that it will be useful, but WITHOUT ANY
11! WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
12! A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
13!
14! You should have received a copy of the GNU General Public License along with
15! PALM. If not, see <http://www.gnu.org/licenses/>.
16!
17! Copyright 1997-2012  Leibniz University Hannover
18!--------------------------------------------------------------------------------!
19!
[254]20! Current revisions:
[1]21! -----------------
[760]22!
[1213]23!
[668]24! Former revisions:
25! -----------------
26! $Id: init_pegrid.f90 1213 2013-08-15 09:03:50Z raasch $
27!
[1213]28! 1212 2013-08-15 08:46:27Z raasch
29! error message for poisfft_hybrid removed
30!
[1160]31! 1159 2013-05-21 11:58:22Z fricke
32! dirichlet/neumann and neumann/dirichlet removed
33!
[1140]34! 1139 2013-04-18 07:25:03Z raasch
35! bugfix for calculating the id of the PE carrying the recycling plane
36!
[1112]37! 1111 2013-03-08 23:54:10Z raasch
38! initialization of poisfft moved to module poisfft
39!
[1093]40! 1092 2013-02-02 11:24:22Z raasch
41! unused variables removed
42!
[1057]43! 1056 2012-11-16 15:28:04Z raasch
44! Indices for arrays n.._mg start from zero due to definition of arrays f2 and
45! p2 as automatic arrays in recursive subroutine next_mg_level
46!
[1042]47! 1041 2012-11-06 02:36:29Z raasch
48! a 2d virtual processor topology is used by default for all machines
49!
[1037]50! 1036 2012-10-22 13:43:42Z raasch
51! code put under GPL (PALM 3.9)
52!
[1004]53! 1003 2012-09-14 14:35:53Z raasch
54! subdomains must have identical size (grid matching = "match" removed)
55!
[1002]56! 1001 2012-09-13 14:08:46Z raasch
57! all actions concerning upstream-spline-method removed
58!
[979]59! 978 2012-08-09 08:28:32Z fricke
60! dirichlet/neumann and neumann/dirichlet added
61! nxlu and nysv are also calculated for inflow boundary
62!
[810]63! 809 2012-01-30 13:32:58Z maronga
64! Bugfix: replaced .AND. and .NOT. with && and ! in the preprocessor directives
65!
[808]66! 807 2012-01-25 11:53:51Z maronga
67! New cpp directive "__check" implemented which is used by check_namelist_files
68!
[781]69! 780 2011-11-10 07:16:47Z raasch
[781]70! Bugfix for rev 778: Misplaced error message moved to the right place
71!
[779]72! 778 2011-11-07 14:18:25Z fricke
73! Calculation of subdomain_size now considers the number of ghost points.
74! Further coarsening on PE0 is now possible for multigrid solver if the
75! collected field has more grid points than the subdomain of an PE.
76!
[760]77! 759 2011-09-15 13:58:31Z raasch
78! calculation of number of io_blocks and the io_group to which the respective
79! PE belongs
80!
[756]81! 755 2011-08-29 09:55:16Z witha
82! 2d-decomposition is default for lcflow (ForWind cluster in Oldenburg)
83!
[723]84! 722 2011-04-11 06:21:09Z raasch
85! Bugfix: bc_lr/ns_cyc/dirrad/raddir replaced by bc_lr/ns, because variables
86!         are not yet set here; grid_level set to 0
87!
[710]88! 709 2011-03-30 09:31:40Z raasch
89! formatting adjustments
90!
[708]91! 707 2011-03-29 11:39:40Z raasch
92! bc_lr/ns replaced by bc_lr/ns_cyc/dirrad/raddir
93!
[668]94! 667 2010-12-23 12:06:00Z suehring/gryschka
[667]95! Moved determination of target_id's from init_coupling
[669]96! Determination of parameters needed for coupling (coupling_topology, ngp_a,
97! ngp_o) with different grid/processor-topology in ocean and atmosphere
[667]98! Adaption of ngp_xy, ngp_y to a dynamic number of ghost points.
99! The maximum_grid_level changed from 1 to 0. 0 is the normal grid, 1 to
100! maximum_grid_level the grids for multigrid, in which 0 and 1 are normal grids.
101! This distinction is due to reasons of data exchange and performance for the
102! normal grid and grids in poismg.
103! The definition of MPI-Vectors adapted to a dynamic number of ghost points.
104! New MPI-Vectors for data exchange between left and right boundaries added.
105! This is due to reasons of performance (10% faster).
[77]106!
[647]107! 646 2010-12-15 13:03:52Z raasch
108! lctit is now using a 2d decomposition by default
109!
[623]110! 622 2010-12-10 08:08:13Z raasch
111! optional barriers included in order to speed up collective operations
112!
[482]113! 438 2010-02-01 04:32:43Z raasch
114! 2d-decomposition is default for Cray-XT machines
[77]115!
[392]116! 274 2009-03-26 15:11:21Z heinze
117! Output of messages replaced by message handling routine.
118!
[226]119! 206 2008-10-13 14:59:11Z raasch
120! Implementation of a MPI-1 coupling: added __parallel within the __mpi2 part
121! 2d-decomposition is default on SGI-ICE systems
122!
[198]123! 197 2008-09-16 15:29:03Z raasch
124! multigrid levels are limited by subdomains if mg_switch_to_pe0_level = -1,
125! nz is used instead of nnz for calculating mg-levels
126! Collect on PE0 horizontal index bounds from all other PEs,
127! broadcast the id of the inflow PE (using the respective communicator)
128!
[139]129! 114 2007-10-10 00:03:15Z raasch
130! Allocation of wall flag arrays for multigrid solver
131!
[110]132! 108 2007-08-24 15:10:38Z letzel
133! Intercommunicator (comm_inter) and derived data type (type_xy) for
134! coupled model runs created, assign coupling_mode_remote,
135! indices nxlu and nysv are calculated (needed for non-cyclic boundary
136! conditions)
137!
[83]138! 82 2007-04-16 15:40:52Z raasch
139! Cpp-directive lcmuk changed to intel_openmp_bug, setting of host on lcmuk by
140! cpp-directive removed
141!
[77]142! 75 2007-03-22 09:54:05Z raasch
[73]143! uxrp, vynp eliminated,
[75]144! dirichlet/neumann changed to dirichlet/radiation, etc.,
145! poisfft_init is only called if fft-solver is switched on
[1]146!
[3]147! RCS Log replace by Id keyword, revision history cleaned up
148!
[1]149! Revision 1.28  2006/04/26 13:23:32  raasch
150! lcmuk does not understand the !$ comment so a cpp-directive is required
151!
152! Revision 1.1  1997/07/24 11:15:09  raasch
153! Initial revision
154!
155!
156! Description:
157! ------------
158! Determination of the virtual processor topology (if not prescribed by the
159! user) and computation of the grid point numbers and array bounds of the local
160! domains.
161!------------------------------------------------------------------------------!
162
163    USE control_parameters
[163]164    USE grid_variables
[1]165    USE indices
166    USE pegrid
167    USE statistics
168    USE transpose_indices
169
170
[667]171
[1]172    IMPLICIT NONE
173
[778]174    INTEGER ::  i, id_inflow_l, id_recycling_l, ind(5), j, k,                &
[151]175                maximum_grid_level_l, mg_switch_to_pe0_level_l, mg_levels_x, &
176                mg_levels_y, mg_levels_z, nnx_y, nnx_z, nny_x, nny_z, nnz_x, &
[1092]177                nnz_y, numproc_sqr, nxl_l, nxr_l, nyn_l, nys_l,    &
[778]178                nzb_l, nzt_l, omp_get_num_threads
[1]179
180    INTEGER, DIMENSION(:), ALLOCATABLE ::  ind_all, nxlf, nxrf, nynf, nysf
181
[667]182    INTEGER, DIMENSION(2) :: pdims_remote
183
[1092]184#if defined( __mpi2 )
[1]185    LOGICAL ::  found
[1092]186#endif
[1]187
188!
189!-- Get the number of OpenMP threads
190    !$OMP PARALLEL
[82]191#if defined( __intel_openmp_bug )
[1]192    threads_per_task = omp_get_num_threads()
193#else
194!$  threads_per_task = omp_get_num_threads()
195#endif
196    !$OMP END PARALLEL
197
198
199#if defined( __parallel )
[667]200
[1]201!
202!-- Determine the processor topology or check it, if prescribed by the user
203    IF ( npex == -1  .AND.  npey == -1 )  THEN
204
205!
206!--    Automatic determination of the topology
[1041]207       numproc_sqr = SQRT( REAL( numprocs ) )
208       pdims(1)    = MAX( numproc_sqr , 1 )
209       DO  WHILE ( MOD( numprocs , pdims(1) ) /= 0 )
210          pdims(1) = pdims(1) - 1
211       ENDDO
212       pdims(2) = numprocs / pdims(1)
[1]213
214    ELSEIF ( npex /= -1  .AND.  npey /= -1 )  THEN
215
216!
217!--    Prescribed by the user. The number of processors in the prescribed topology
218!--    must be equal to the number of PEs available to the job
219       IF ( ( npex * npey ) /= numprocs )  THEN
[274]220          WRITE( message_string, * ) 'number of PEs of the prescribed ',      & 
221                 'topology (', npex*npey,') does not match & the number of ', & 
222                 'PEs available to the job (', numprocs, ')'
[254]223          CALL message( 'init_pegrid', 'PA0221', 1, 2, 0, 6, 0 )
[1]224       ENDIF
225       pdims(1) = npex
226       pdims(2) = npey
227
228    ELSE
229!
230!--    If the processor topology is prescribed by the user, the number of
231!--    PEs must be given in both directions
[274]232       message_string = 'if the processor topology is prescribed by the ' //   &
233                   'user, & both values of "npex" and "npey" must be given ' // &
234                   'in the &NAMELIST-parameter file'
[254]235       CALL message( 'init_pegrid', 'PA0222', 1, 2, 0, 6, 0 )
[1]236
237    ENDIF
238
239!
[622]240!-- For communication speedup, set barriers in front of collective
241!-- communications by default on SGI-type systems
242    IF ( host(3:5) == 'sgi' )  collective_wait = .TRUE.
243
244!
[1]245!-- If necessary, set horizontal boundary conditions to non-cyclic
[722]246    IF ( bc_lr /= 'cyclic' )  cyclic(1) = .FALSE.
247    IF ( bc_ns /= 'cyclic' )  cyclic(2) = .FALSE.
[1]248
[807]249
[809]250#if ! defined( __check)
[1]251!
252!-- Create the virtual processor grid
253    CALL MPI_CART_CREATE( comm_palm, ndim, pdims, cyclic, reorder, &
254                          comm2d, ierr )
255    CALL MPI_COMM_RANK( comm2d, myid, ierr )
256    WRITE (myid_char,'(''_'',I4.4)')  myid
257
258    CALL MPI_CART_COORDS( comm2d, myid, ndim, pcoord, ierr )
259    CALL MPI_CART_SHIFT( comm2d, 0, 1, pleft, pright, ierr )
260    CALL MPI_CART_SHIFT( comm2d, 1, 1, psouth, pnorth, ierr )
261
262!
263!-- Determine sub-topologies for transpositions
264!-- Transposition from z to x:
265    remain_dims(1) = .TRUE.
266    remain_dims(2) = .FALSE.
267    CALL MPI_CART_SUB( comm2d, remain_dims, comm1dx, ierr )
268    CALL MPI_COMM_RANK( comm1dx, myidx, ierr )
269!
270!-- Transposition from x to y
271    remain_dims(1) = .FALSE.
272    remain_dims(2) = .TRUE.
273    CALL MPI_CART_SUB( comm2d, remain_dims, comm1dy, ierr )
274    CALL MPI_COMM_RANK( comm1dy, myidy, ierr )
275
[807]276#endif
[1]277
278!
[1003]279!-- Calculate array bounds along x-direction for every PE.
[1]280    ALLOCATE( nxlf(0:pdims(1)-1), nxrf(0:pdims(1)-1), nynf(0:pdims(2)-1), &
[1003]281              nysf(0:pdims(2)-1) )
[1]282
[1003]283    IF ( MOD( nx+1 , pdims(1) ) /= 0 )  THEN
[274]284       WRITE( message_string, * ) 'x-direction: gridpoint number (',nx+1,') ',&
285                               'is not an& integral divisor of the number of ', &
286                               'processors (', pdims(1),')'
[254]287       CALL message( 'init_pegrid', 'PA0225', 1, 2, 0, 6, 0 )
[1]288    ELSE
[1003]289       nnx  = ( nx + 1 ) / pdims(1)
[1]290       IF ( nnx*pdims(1) - ( nx + 1) > nnx )  THEN
[274]291          WRITE( message_string, * ) 'x-direction: nx does not match the ',   &
292                       'requirements given by the number of PEs &used',       &
293                       '& please use nx = ', nx - ( pdims(1) - ( nnx*pdims(1) &
294                                   - ( nx + 1 ) ) ), ' instead of nx =', nx
[254]295          CALL message( 'init_pegrid', 'PA0226', 1, 2, 0, 6, 0 )
[1]296       ENDIF
297    ENDIF   
298
299!
300!-- Left and right array bounds, number of gridpoints
301    DO  i = 0, pdims(1)-1
302       nxlf(i)   = i * nnx
303       nxrf(i)   = ( i + 1 ) * nnx - 1
304    ENDDO
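!
!-- As an illustration, nx = 127 and pdims(1) = 4 give nnx = 32 and the
!-- subdomain bounds nxlf/nxrf = 0:31, 32:63, 64:95 and 96:127.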
305
306!
307!-- Calculate array bounds in y-direction for every PE.
[1003]308    IF ( MOD( ny+1 , pdims(2) ) /= 0 )  THEN
[274]309       WRITE( message_string, * ) 'y-direction: gridpoint number (',ny+1,') ', &
310                           'is not an& integral divisor of the number of ',    &
311                           'processors (', pdims(2),')'
[254]312       CALL message( 'init_pegrid', 'PA0227', 1, 2, 0, 6, 0 )
[1]313    ELSE
[1003]314       nny  = ( ny + 1 ) / pdims(2)
[1]315       IF ( nny*pdims(2) - ( ny + 1) > nny )  THEN
[274]316          WRITE( message_string, * ) 'y-direction: ny does not match the ',   &
317                       'requirements given by the number of PEs &used ',      &
318                       '& please use ny = ', ny - ( pdims(2) - ( nny*pdims(2) &
[254]319                                     - ( ny + 1 ) ) ), ' instead of ny =', ny
320          CALL message( 'init_pegrid', 'PA0228', 1, 2, 0, 6, 0 )
[1]321       ENDIF
322    ENDIF   
323
324!
325!-- South and north array bounds
326    DO  j = 0, pdims(2)-1
327       nysf(j)   = j * nny
328       nynf(j)   = ( j + 1 ) * nny - 1
329    ENDDO
330
331!
332!-- Local array bounds of the respective PEs
[1003]333    nxl = nxlf(pcoord(1))
334    nxr = nxrf(pcoord(1))
335    nys = nysf(pcoord(2))
336    nyn = nynf(pcoord(2))
337    nzb = 0
338    nzt = nz
339    nnz = nz
[1]340
341!
[707]342!-- Set switches to define if the PE is situated at the border of the virtual
343!-- processor grid
344    IF ( nxl == 0 )   left_border_pe  = .TRUE.
345    IF ( nxr == nx )  right_border_pe = .TRUE.
346    IF ( nys == 0 )   south_border_pe = .TRUE.
347    IF ( nyn == ny )  north_border_pe = .TRUE.
348
349!
[1]350!-- Calculate array bounds and gridpoint numbers for the transposed arrays
351!-- (needed in the pressure solver)
352!-- For the transposed arrays, cyclic boundaries as well as top and bottom
353!-- boundaries are omitted, because they are obstructive to the transposition
354
355!
356!-- 1. transposition  z --> x
[1001]357!-- This transposition is not necessary in case of a 1d-decomposition along x
358    IF ( pdims(2) /= 1 )  THEN
[1]359
[1003]360       nys_x = nys
361       nyn_x = nyn
362       nny_x = nny
363       IF ( MOD( nz , pdims(1) ) /= 0 )  THEN
[274]364          WRITE( message_string, * ) 'transposition z --> x:',                &
365                       '&nz=',nz,' is not an integral divisor of pdims(1)=', &
366                                                                   pdims(1)
[254]367          CALL message( 'init_pegrid', 'PA0230', 1, 2, 0, 6, 0 )
[1]368       ENDIF
[1003]369       nnz_x = nz / pdims(1)
370       nzb_x = 1 + myidx * nnz_x
371       nzt_x = ( myidx + 1 ) * nnz_x
[1]372       sendrecvcount_zx = nnx * nny * nnz_x
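!
!--    As an illustration, nz = 64 and pdims(1) = 4 give nnz_x = 16, so that
!--    e.g. the PE with myidx = 2 holds the levels nzb_x = 33 to nzt_x = 48.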
373
[181]374    ELSE
375!
376!---   Setting of dummy values because otherwise variables are undefined in
377!---   the next step  x --> y
378!---   WARNING: This case still has to be clarified!!!!!!!!!!!!
[1003]379       nnz_x = 1
380       nzb_x = 1
381       nzt_x = 1
382       nny_x = nny
[181]383
[1]384    ENDIF
385
386!
387!-- 2. transposition  x --> y
[1003]388    nnz_y = nnz_x
389    nzb_y = nzb_x
390    nzt_y = nzt_x
391    IF ( MOD( nx+1 , pdims(2) ) /= 0 )  THEN
[274]392       WRITE( message_string, * ) 'transposition x --> y:',                &
393                         '&nx+1=',nx+1,' is not an integral divisor of ',&
394                         'pdims(2)=',pdims(2)
[254]395       CALL message( 'init_pegrid', 'PA0231', 1, 2, 0, 6, 0 )
[1]396    ENDIF
[1003]397    nnx_y = (nx+1) / pdims(2)
[1]398    nxl_y = myidy * nnx_y
[1003]399    nxr_y = ( myidy + 1 ) * nnx_y - 1
[1]400    sendrecvcount_xy = nnx_y * nny_x * nnz_y
401
402!
403!-- 3. transposition  y --> z  (ELSE:  x --> y  in case of 1D-decomposition
404!-- along x)
[1001]405    IF ( pdims(2) /= 1 )  THEN
[1]406!
407!--    y --> z
408!--    This transposition is not necessary in case of a 1d-decomposition
409!--    along x, unless the upstream-spline method is switched on
[1003]410       nnx_z = nnx_y
411       nxl_z = nxl_y
412       nxr_z = nxr_y
413       IF ( MOD( ny+1 , pdims(1) ) /= 0 )  THEN
[274]414          WRITE( message_string, * ) 'transposition y --> z:',            &
415                            '& ny+1=',ny+1,' is not an integral divisor of ',&
416                            'pdims(1)=',pdims(1)
[254]417          CALL message( 'init_pegrid', 'PA0232', 1, 2, 0, 6, 0 )
[1]418       ENDIF
[1003]419       nny_z = (ny+1) / pdims(1)
420       nys_z = myidx * nny_z
421       nyn_z = ( myidx + 1 ) * nny_z - 1
[1]422       sendrecvcount_yz = nnx_y * nny_z * nnz_y
423
424    ELSE
425!
426!--    x --> y. This condition must be fulfilled for a 1D-decomposition along x
[1003]427       IF ( MOD( ny+1 , pdims(1) ) /= 0 )  THEN
[274]428          WRITE( message_string, * ) 'transposition x --> y:',               &
429                            '& ny+1=',ny+1,' is not an integral divisor of ',&
430                            'pdims(1)=',pdims(1)
[254]431          CALL message( 'init_pegrid', 'PA0233', 1, 2, 0, 6, 0 )
[1]432       ENDIF
433
434    ENDIF
435
436!
437!-- Indices for direct transpositions z --> y (used for calculating spectra)
438    IF ( dt_dosp /= 9999999.9 )  THEN
[1003]439       IF ( MOD( nz, pdims(2) ) /= 0 )  THEN
[274]440          WRITE( message_string, * ) 'direct transposition z --> y (needed ', &
441                    'for spectra):& nz=',nz,' is not an integral divisor of ',&
442                    'pdims(2)=',pdims(2)
[254]443          CALL message( 'init_pegrid', 'PA0234', 1, 2, 0, 6, 0 )
[1]444       ELSE
[1003]445          nxl_yd = nxl
446          nxr_yd = nxr
447          nzb_yd = 1 + myidy * ( nz / pdims(2) )
448          nzt_yd = ( myidy + 1 ) * ( nz / pdims(2) )
449          sendrecvcount_zyd = nnx * nny * ( nz / pdims(2) )
[1]450       ENDIF
451    ENDIF
452
453!
454!-- Indices for direct transpositions y --> x (they are only possible in case
455!-- of a 1d-decomposition along x)
456    IF ( pdims(2) == 1 )  THEN
[1003]457       nny_x = nny / pdims(1)
458       nys_x = myid * nny_x
459       nyn_x = ( myid + 1 ) * nny_x - 1
460       nzb_x = 1
461       nzt_x = nz
462       sendrecvcount_xy = nnx * nny_x * nz
[1]463    ENDIF
464
465!
466!-- Indices for direct transpositions x --> y (they are only possible in case
467!-- of a 1d-decomposition along y)
468    IF ( pdims(1) == 1 )  THEN
[1003]469       nnx_y = nnx / pdims(2)
470       nxl_y = myid * nnx_y
471       nxr_y = ( myid + 1 ) * nnx_y - 1
472       nzb_y = 1
473       nzt_y = nz
474       sendrecvcount_xy = nnx_y * nny * nz
[1]475    ENDIF
476
477!
478!-- Arrays for storing the array bounds are not needed anymore
479    DEALLOCATE( nxlf , nxrf , nynf , nysf )
480
[807]481
[809]482#if ! defined( __check)
[145]483!
484!-- Collect index bounds from other PEs (to be written to restart file later)
485    ALLOCATE( hor_index_bounds(4,0:numprocs-1) )
486
487    IF ( myid == 0 )  THEN
488
489       hor_index_bounds(1,0) = nxl
490       hor_index_bounds(2,0) = nxr
491       hor_index_bounds(3,0) = nys
492       hor_index_bounds(4,0) = nyn
493
494!
495!--    Receive data from all other PEs
496       DO  i = 1, numprocs-1
497          CALL MPI_RECV( ibuf, 4, MPI_INTEGER, i, MPI_ANY_TAG, comm2d, status, &
498                         ierr )
499          hor_index_bounds(:,i) = ibuf(1:4)
500       ENDDO
501
502    ELSE
503!
504!--    Send index bounds to PE0
505       ibuf(1) = nxl
506       ibuf(2) = nxr
507       ibuf(3) = nys
508       ibuf(4) = nyn
509       CALL MPI_SEND( ibuf, 4, MPI_INTEGER, 0, myid, comm2d, ierr )
510
511    ENDIF
512
[807]513#endif
514
[1]515#if defined( __print )
516!
517!-- Control output
518    IF ( myid == 0 )  THEN
519       PRINT*, '*** processor topology ***'
520       PRINT*, ' '
521       PRINT*, 'myid   pcoord    left right  south north  idx idy   nxl: nxr',&
522               &'   nys: nyn'
523       PRINT*, '------------------------------------------------------------',&
524               &'-----------'
525       WRITE (*,1000)  0, pcoord(1), pcoord(2), pleft, pright, psouth, pnorth, &
526                       myidx, myidy, nxl, nxr, nys, nyn
5271000   FORMAT (I4,2X,'(',I3,',',I3,')',3X,I4,2X,I4,3X,I4,2X,I4,2X,I3,1X,I3, &
528               2(2X,I4,':',I4))
529
530!
[108]531!--    Receive data from the other PEs
[1]532       DO  i = 1,numprocs-1
533          CALL MPI_RECV( ibuf, 12, MPI_INTEGER, i, MPI_ANY_TAG, comm2d, status, &
534                         ierr )
535          WRITE (*,1000)  i, ( ibuf(j) , j = 1,12 )
536       ENDDO
537    ELSE
538
539!
540!--    Send data to PE0
541       ibuf(1) = pcoord(1); ibuf(2) = pcoord(2); ibuf(3) = pleft
542       ibuf(4) = pright; ibuf(5) = psouth; ibuf(6) = pnorth; ibuf(7) = myidx
543       ibuf(8) = myidy; ibuf(9) = nxl; ibuf(10) = nxr; ibuf(11) = nys
544       ibuf(12) = nyn
545       CALL MPI_SEND( ibuf, 12, MPI_INTEGER, 0, myid, comm2d, ierr )       
546    ENDIF
547#endif
548
[809]549#if defined( __parallel ) && ! defined( __check)
[102]550#if defined( __mpi2 )
551!
552!-- In case of coupled runs, get the port name on PE0 of the atmosphere model
553!-- and pass it to PE0 of the ocean model
554    IF ( myid == 0 )  THEN
555
556       IF ( coupling_mode == 'atmosphere_to_ocean' )  THEN
557
558          CALL MPI_OPEN_PORT( MPI_INFO_NULL, port_name, ierr )
[108]559
[102]560          CALL MPI_PUBLISH_NAME( 'palm_coupler', MPI_INFO_NULL, port_name, &
561                                 ierr )
[108]562
563!
[104]564!--       Write a flag file for the ocean model and the other atmosphere
565!--       processes.
566!--       There seems to be a bug in MPICH2 which causes processes to hang
567!--       if execution of LOOKUP_NAME is continued too early
568!--       (i.e. before the port has been created)
569          OPEN( 90, FILE='COUPLING_PORT_OPENED', FORM='FORMATTED' )
570          WRITE ( 90, '(''TRUE'')' )
571          CLOSE ( 90 )
[102]572
573       ELSEIF ( coupling_mode == 'ocean_to_atmosphere' )  THEN
574
[104]575!
576!--       Continue only if the atmosphere model has created the port.
577!--       There seems to be a bug in MPICH2 which causes processes to hang
578!--       if execution of LOOKUP_NAME is continued too early
579!--       (i.e. before the port has been created)
580          INQUIRE( FILE='COUPLING_PORT_OPENED', EXIST=found )
581          DO WHILE ( .NOT. found )
582             INQUIRE( FILE='COUPLING_PORT_OPENED', EXIST=found )
583          ENDDO
584
[102]585          CALL MPI_LOOKUP_NAME( 'palm_coupler', MPI_INFO_NULL, port_name, ierr )
586
587       ENDIF
588
589    ENDIF
590
591!
592!-- In case of coupled runs, establish the connection between the atmosphere
593!-- and the ocean model and define the intercommunicator (comm_inter)
594    CALL MPI_BARRIER( comm2d, ierr )
595    IF ( coupling_mode == 'atmosphere_to_ocean' )  THEN
596
597       CALL MPI_COMM_ACCEPT( port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &
598                             comm_inter, ierr )
[108]599       coupling_mode_remote = 'ocean_to_atmosphere'
600
[102]601    ELSEIF ( coupling_mode == 'ocean_to_atmosphere' )  THEN
602
603       CALL MPI_COMM_CONNECT( port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &
604                              comm_inter, ierr )
[108]605       coupling_mode_remote = 'atmosphere_to_ocean'
606
[102]607    ENDIF
[206]608#endif
[102]609
[667]610!
[709]611!-- Determine the number of ghost point layers
612    IF ( scalar_advec == 'ws-scheme' .OR. momentum_advec == 'ws-scheme' )  THEN
[667]613       nbgp = 3
614    ELSE
615       nbgp = 1
[709]616    ENDIF 
[667]617
[102]618!
[709]619!-- Create a new MPI derived datatype for the exchange of surface (xy) data,
620!-- which is needed for coupled atmosphere-ocean runs.
621!-- First, calculate number of grid points of an xy-plane.
[667]622    ngp_xy  = ( nxr - nxl + 1 + 2 * nbgp ) * ( nyn - nys + 1 + 2 * nbgp )
[102]623    CALL MPI_TYPE_VECTOR( ngp_xy, 1, nzt-nzb+2, MPI_REAL, type_xy, ierr )
624    CALL MPI_TYPE_COMMIT( type_xy, ierr )
[667]625
[709]626    IF ( TRIM( coupling_mode ) /= 'uncoupled' )  THEN
[667]627   
628!
629!--    Pass the number of grid points of the atmosphere model to
630!--    the ocean model and vice versa
631       IF ( coupling_mode == 'atmosphere_to_ocean' )  THEN
632
633          nx_a = nx
634          ny_a = ny
635
[709]636          IF ( myid == 0 )  THEN
637
638             CALL MPI_SEND( nx_a, 1, MPI_INTEGER, numprocs, 1, comm_inter,  &
639                            ierr )
640             CALL MPI_SEND( ny_a, 1, MPI_INTEGER, numprocs, 2, comm_inter,  &
641                            ierr )
642             CALL MPI_SEND( pdims, 2, MPI_INTEGER, numprocs, 3, comm_inter, &
643                            ierr )
644             CALL MPI_RECV( nx_o, 1, MPI_INTEGER, numprocs, 4, comm_inter,  &
645                            status, ierr )
646             CALL MPI_RECV( ny_o, 1, MPI_INTEGER, numprocs, 5, comm_inter,  &
647                            status, ierr )
648             CALL MPI_RECV( pdims_remote, 2, MPI_INTEGER, numprocs, 6,      &
[667]649                            comm_inter, status, ierr )
650          ENDIF
651
[709]652          CALL MPI_BCAST( nx_o, 1, MPI_INTEGER, 0, comm2d, ierr )
653          CALL MPI_BCAST( ny_o, 1, MPI_INTEGER, 0, comm2d, ierr ) 
654          CALL MPI_BCAST( pdims_remote, 2, MPI_INTEGER, 0, comm2d, ierr )
[667]655       
656       ELSEIF ( coupling_mode == 'ocean_to_atmosphere' )  THEN
657
658          nx_o = nx
659          ny_o = ny 
660
661          IF ( myid == 0 ) THEN
[709]662
663             CALL MPI_RECV( nx_a, 1, MPI_INTEGER, 0, 1, comm_inter, status, &
664                            ierr )
665             CALL MPI_RECV( ny_a, 1, MPI_INTEGER, 0, 2, comm_inter, status, &
666                            ierr )
667             CALL MPI_RECV( pdims_remote, 2, MPI_INTEGER, 0, 3, comm_inter, &
668                            status, ierr )
669             CALL MPI_SEND( nx_o, 1, MPI_INTEGER, 0, 4, comm_inter, ierr )
670             CALL MPI_SEND( ny_o, 1, MPI_INTEGER, 0, 5, comm_inter, ierr )
671             CALL MPI_SEND( pdims, 2, MPI_INTEGER, 0, 6, comm_inter, ierr )
[667]672          ENDIF
673
674          CALL MPI_BCAST( nx_a, 1, MPI_INTEGER, 0, comm2d, ierr)
675          CALL MPI_BCAST( ny_a, 1, MPI_INTEGER, 0, comm2d, ierr) 
676          CALL MPI_BCAST( pdims_remote, 2, MPI_INTEGER, 0, comm2d, ierr) 
677
678       ENDIF
679 
[709]680       ngp_a = ( nx_a+1 + 2 * nbgp ) * ( ny_a+1 + 2 * nbgp )
681       ngp_o = ( nx_o+1 + 2 * nbgp ) * ( ny_o+1 + 2 * nbgp )
[667]682
683!
[709]684!--    Determine whether the horizontal grid and the number of PEs in ocean
685!--    and atmosphere are the same
686       IF ( nx_o == nx_a  .AND.  ny_o == ny_a  .AND.  &
[667]687            pdims(1) == pdims_remote(1) .AND. pdims(2) == pdims_remote(2) ) &
688       THEN
689          coupling_topology = 0
690       ELSE
691          coupling_topology = 1
692       ENDIF 
693
694!
695!--    Determine the target PEs for the exchange between ocean and
696!--    atmosphere (comm2d)
[709]697       IF ( coupling_topology == 0 )  THEN
698!
699!--       In case of identical topologies, every atmosphere PE has exactly one
700!--       ocean PE counterpart and vice versa
701          IF ( TRIM( coupling_mode ) == 'atmosphere_to_ocean' ) THEN
[667]702             target_id = myid + numprocs
703          ELSE
704             target_id = myid 
705          ENDIF
706
707       ELSE
708!
709!--       In case of non-matching topologies in ocean and atmosphere, only
710!--       PE0 in the ocean and PE0 in the atmosphere need a target_id, since
[709]711!--       data exchange between ocean and atmosphere will be done only
712!--       between these PEs.
713          IF ( myid == 0 )  THEN
714
715             IF ( TRIM( coupling_mode ) == 'atmosphere_to_ocean' )  THEN
[667]716                target_id = numprocs 
717             ELSE
718                target_id = 0
719             ENDIF
[709]720
[667]721          ENDIF
[709]722
[667]723       ENDIF
724
725    ENDIF
726
727
[102]728#endif
729
[1]730#else
731
732!
733!-- Array bounds when running on a single PE (i.e. on a non-parallel
734!-- machine)
[1003]735    nxl = 0
736    nxr = nx
737    nnx = nxr - nxl + 1
738    nys = 0
739    nyn = ny
740    nny = nyn - nys + 1
741    nzb = 0
742    nzt = nz
743    nnz = nz
[1]744
[145]745    ALLOCATE( hor_index_bounds(4,0:0) )
746    hor_index_bounds(1,0) = nxl
747    hor_index_bounds(2,0) = nxr
748    hor_index_bounds(3,0) = nys
749    hor_index_bounds(4,0) = nyn
750
[1]751!
752!-- Array bounds for the pressure solver (in the parallel code, these bounds
753!-- are the ones for the transposed arrays)
[1003]754    nys_x = nys
755    nyn_x = nyn
756    nzb_x = nzb + 1
757    nzt_x = nzt
[1]758
[1003]759    nxl_y = nxl
760    nxr_y = nxr
761    nzb_y = nzb + 1
762    nzt_y = nzt
[1]763
[1003]764    nxl_z = nxl
765    nxr_z = nxr
766    nys_z = nys
767    nyn_z = nyn
[1]768
769#endif
770
771!
772!-- Calculate number of grid levels necessary for the multigrid Poisson solver
773!-- as well as the gridpoint indices on each level
774    IF ( psolver == 'multigrid' )  THEN
775
776!
777!--    First calculate number of possible grid levels for the subdomains
778       mg_levels_x = 1
779       mg_levels_y = 1
780       mg_levels_z = 1
781
782       i = nnx
783       DO WHILE ( MOD( i, 2 ) == 0  .AND.  i /= 2 )
784          i = i / 2
785          mg_levels_x = mg_levels_x + 1
786       ENDDO
787
788       j = nny
789       DO WHILE ( MOD( j, 2 ) == 0  .AND.  j /= 2 )
790          j = j / 2
791          mg_levels_y = mg_levels_y + 1
792       ENDDO
793
[181]794       k = nz    ! do not use nnz because it might be > nz due to transposition
795                 ! requirements
[1]796       DO WHILE ( MOD( k, 2 ) == 0  .AND.  k /= 2 )
797          k = k / 2
798          mg_levels_z = mg_levels_z + 1
799       ENDDO
800
801       maximum_grid_level = MIN( mg_levels_x, mg_levels_y, mg_levels_z )
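!
!--    As an illustration, nnx = nny = 32 and nz = 64 give mg_levels_x =
!--    mg_levels_y = 5 and mg_levels_z = 6, hence maximum_grid_level = 5.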
802
803!
804!--    Find out whether the total domain allows more levels. These additional
[709]805!--    levels are processed identically on all PEs.
[197]806       IF ( numprocs > 1  .AND.  mg_switch_to_pe0_level /= -1 )  THEN
[709]807
[1]808          IF ( mg_levels_z > MIN( mg_levels_x, mg_levels_y ) )  THEN
[709]809
[1]810             mg_switch_to_pe0_level_l = maximum_grid_level
811
812             mg_levels_x = 1
813             mg_levels_y = 1
814
815             i = nx+1
816             DO WHILE ( MOD( i, 2 ) == 0  .AND.  i /= 2 )
817                i = i / 2
818                mg_levels_x = mg_levels_x + 1
819             ENDDO
820
821             j = ny+1
822             DO WHILE ( MOD( j, 2 ) == 0  .AND.  j /= 2 )
823                j = j / 2
824                mg_levels_y = mg_levels_y + 1
825             ENDDO
826
827             maximum_grid_level_l = MIN( mg_levels_x, mg_levels_y, mg_levels_z )
828
829             IF ( maximum_grid_level_l > mg_switch_to_pe0_level_l )  THEN
830                mg_switch_to_pe0_level_l = maximum_grid_level_l - &
831                                           mg_switch_to_pe0_level_l + 1
832             ELSE
833                mg_switch_to_pe0_level_l = 0
834             ENDIF
[709]835
[1]836          ELSE
837             mg_switch_to_pe0_level_l = 0
838             maximum_grid_level_l = maximum_grid_level
[709]839
[1]840          ENDIF
841
842!
843!--       Use switch level calculated above only if it is not pre-defined
844!--       by the user
845          IF ( mg_switch_to_pe0_level == 0 )  THEN
846             IF ( mg_switch_to_pe0_level_l /= 0 )  THEN
847                mg_switch_to_pe0_level = mg_switch_to_pe0_level_l
848                maximum_grid_level     = maximum_grid_level_l
849             ENDIF
850
851          ELSE
852!
853!--          Check pre-defined value and reset to default, if necessary
854             IF ( mg_switch_to_pe0_level < mg_switch_to_pe0_level_l  .OR.  &
855                  mg_switch_to_pe0_level >= maximum_grid_level_l )  THEN
[254]856                message_string = 'mg_switch_to_pe0_level ' // &
857                                 'out of range and reset to default (=0)'
858                CALL message( 'init_pegrid', 'PA0235', 0, 1, 0, 6, 0 )
[1]859                mg_switch_to_pe0_level = 0
860             ELSE
861!
862!--             Use the largest number of possible levels anyway and recalculate
863!--             the switch level to this largest number of possible values
864                maximum_grid_level = maximum_grid_level_l
865
866             ENDIF
[709]867
[1]868          ENDIF
869
870       ENDIF
871
[1056]872       ALLOCATE( grid_level_count(maximum_grid_level),                       &
873                 nxl_mg(0:maximum_grid_level), nxr_mg(0:maximum_grid_level), &
874                 nyn_mg(0:maximum_grid_level), nys_mg(0:maximum_grid_level), &
875                 nzt_mg(0:maximum_grid_level) )
[1]876
877       grid_level_count = 0
[1056]878!
879!--    Index zero required as dummy due to definition of arrays f2 and p2 in
880!--    recursive subroutine next_mg_level
881       nxl_mg(0) = 0; nxr_mg(0) = 0; nyn_mg(0) = 0; nys_mg(0) = 0; nzt_mg(0) = 0
[778]882
[1]883       nxl_l = nxl; nxr_l = nxr; nys_l = nys; nyn_l = nyn; nzt_l = nzt
884
885       DO  i = maximum_grid_level, 1 , -1
886
887          IF ( i == mg_switch_to_pe0_level )  THEN
[809]888#if defined( __parallel ) && ! defined( __check )
[1]889!
890!--          Save the grid size of the subdomain at the switch level, because
891!--          it is needed in poismg.
892             ind(1) = nxl_l; ind(2) = nxr_l
893             ind(3) = nys_l; ind(4) = nyn_l
894             ind(5) = nzt_l
895             ALLOCATE( ind_all(5*numprocs), mg_loc_ind(5,0:numprocs-1) )
896             CALL MPI_ALLGATHER( ind, 5, MPI_INTEGER, ind_all, 5, &
897                                 MPI_INTEGER, comm2d, ierr )
898             DO  j = 0, numprocs-1
899                DO  k = 1, 5
900                   mg_loc_ind(k,j) = ind_all(k+j*5)
901                ENDDO
902             ENDDO
903             DEALLOCATE( ind_all )
904!
[709]905!--          Calculate the grid size of the total domain
[1]906             nxr_l = ( nxr_l-nxl_l+1 ) * pdims(1) - 1
907             nxl_l = 0
908             nyn_l = ( nyn_l-nys_l+1 ) * pdims(2) - 1
909             nys_l = 0
910!
911!--          The size of this gathered array must not be larger than the
912!--          array tend, which is used in the multigrid scheme as a temporary
[778]913!--          array. Therefore the subdomain size of a PE as well as the size
914!--          of the gathered grid are calculated. These values are used in
915!--          routines pres and poismg
916             subdomain_size = ( nxr - nxl + 2 * nbgp + 1 ) * &
917                              ( nyn - nys + 2 * nbgp + 1 ) * ( nzt - nzb + 2 )
[1]918             gathered_size  = ( nxr_l - nxl_l + 3 ) * ( nyn_l - nys_l + 3 ) * &
919                              ( nzt_l - nzb + 2 )
920
[809]921#elif ! defined ( __parallel )
[254]922             message_string = 'multigrid gather/scatter impossible ' // &
[1]923                          'in non-parallel mode'
[254]924             CALL message( 'init_pegrid', 'PA0237', 1, 2, 0, 6, 0 )
[1]925#endif
926          ENDIF
927
928          nxl_mg(i) = nxl_l
929          nxr_mg(i) = nxr_l
930          nys_mg(i) = nys_l
931          nyn_mg(i) = nyn_l
932          nzt_mg(i) = nzt_l
933
934          nxl_l = nxl_l / 2 
935          nxr_l = nxr_l / 2
936          nys_l = nys_l / 2 
937          nyn_l = nyn_l / 2 
938          nzt_l = nzt_l / 2 
[778]939
[1]940       ENDDO
941
[780]942!
[780]943!--    Temporary problem: Currently, the calculation of maxerror in routine poismg crashes
944!--    if grid data are collected on PE0 already on the finest grid level.
945!--    To be solved later.
946       IF ( maximum_grid_level == mg_switch_to_pe0_level )  THEN
947          message_string = 'grid coarsening on subdomain level cannot be performed'
948          CALL message( 'poismg', 'PA0236', 1, 2, 0, 6, 0 )
949       ENDIF
950
[1]951    ELSE
952
[667]953       maximum_grid_level = 0
[1]954
955    ENDIF
956
[722]957!
958!-- Default level 0 tells exchange_horiz that all ghost planes have to be
959!-- exchanged. grid_level is adjusted in poismg, where only one ghost plane
960!-- is required.
961    grid_level = 0
[1]962
[809]963#if defined( __parallel ) && ! defined ( __check )
[1]964!
965!-- Gridpoint number for the exchange of ghost points (y-line for 2D-arrays)
[667]966    ngp_y  = nyn - nys + 1 + 2 * nbgp
[1]967
968!
[709]969!-- Define new MPI derived datatypes for the exchange of ghost points in
970!-- x- and y-direction for 2D-arrays (line)
971    CALL MPI_TYPE_VECTOR( nxr-nxl+1+2*nbgp, nbgp, ngp_y, MPI_REAL, type_x, &
972                          ierr )
[1]973    CALL MPI_TYPE_COMMIT( type_x, ierr )
[709]974    CALL MPI_TYPE_VECTOR( nxr-nxl+1+2*nbgp, nbgp, ngp_y, MPI_INTEGER, &
975                          type_x_int, ierr )
[1]976    CALL MPI_TYPE_COMMIT( type_x_int, ierr )
977
[667]978    CALL MPI_TYPE_VECTOR( nbgp, ngp_y, ngp_y, MPI_REAL, type_y, ierr )
979    CALL MPI_TYPE_COMMIT( type_y, ierr )
980    CALL MPI_TYPE_VECTOR( nbgp, ngp_y, ngp_y, MPI_INTEGER, type_y_int, ierr )
981    CALL MPI_TYPE_COMMIT( type_y_int, ierr )
982
983
[1]984!
985!-- Calculate gridpoint numbers for the exchange of ghost points along x
986!-- (yz-plane for 3D-arrays) and define MPI derived data type(s) for the
987!-- exchange of ghost points in y-direction (xz-plane).
988!-- Do these calculations for the model grid and (if necessary) also
989!-- for the coarser grid levels used in the multigrid method
[667]990    ALLOCATE ( ngp_yz(0:maximum_grid_level), type_xz(0:maximum_grid_level),&
991               type_yz(0:maximum_grid_level) )
[1]992
993    nxl_l = nxl; nxr_l = nxr; nys_l = nys; nyn_l = nyn; nzb_l = nzb; nzt_l = nzt
[709]994
[667]995!
996!-- Distinguish between the model grid, which needs nbgp ghost points, and
997!-- the grid levels for the multigrid scheme, for which only one
998!-- ghost point is necessary.
[709]999!-- First definition of MPI-datatypes for exchange of ghost layers on normal
[667]1000!-- grid. The following loop is needed for data exchange in poismg.f90.
1001!
1002!-- Determine number of grid points of yz-layer for exchange
1003    ngp_yz(0) = (nzt - nzb + 2) * (nyn - nys + 1 + 2 * nbgp)
[709]1004
[667]1005!
[709]1006!-- Define an MPI-datatype for the exchange of left/right boundaries.
1007!-- Although data are contiguous in physical memory (which does not
1008!-- necessarily require an MPI-derived datatype), the data exchange between
1009!-- left and right PEs using the MPI-derived type is 10% faster than without.
[667]1010    CALL MPI_TYPE_VECTOR( nxr-nxl+1+2*nbgp, nbgp*(nzt-nzb+2), ngp_yz(0), &
[709]1011                          MPI_REAL, type_xz(0), ierr )
[667]1012    CALL MPI_TYPE_COMMIT( type_xz(0), ierr )
[1]1013
[709]1014    CALL MPI_TYPE_VECTOR( nbgp, ngp_yz(0), ngp_yz(0), MPI_REAL, type_yz(0), &
1015                          ierr ) 
[667]1016    CALL MPI_TYPE_COMMIT( type_yz(0), ierr )
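!
!-- Reminder on the MPI_TYPE_VECTOR arguments (count, blocklength, stride):
!-- type_xz(0) consists of nxr-nxl+1+2*nbgp blocks of nbgp*(nzt-nzb+2)
!-- contiguous REAL values, the starts of successive blocks being ngp_yz(0)
!-- elements apart; type_yz(0) consists of nbgp contiguous blocks of
!-- ngp_yz(0) values each.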
[709]1017
[667]1018!
[709]1019!-- Definition of MPI-datatypes for multigrid method (coarser level grids)
[667]1020    IF ( psolver == 'multigrid' )  THEN
1021!   
[709]1022!--    Definition of MPI-datatypes as above, but only 1 ghost level is used
1023       DO  i = maximum_grid_level, 1 , -1
1024
[667]1025          ngp_yz(i) = (nzt_l - nzb_l + 2) * (nyn_l - nys_l + 3)
1026
1027          CALL MPI_TYPE_VECTOR( nxr_l-nxl_l+3, nzt_l-nzb_l+2, ngp_yz(i), &
[709]1028                                MPI_REAL, type_xz(i), ierr )
[667]1029          CALL MPI_TYPE_COMMIT( type_xz(i), ierr )
[1]1030
[709]1031          CALL MPI_TYPE_VECTOR( 1, ngp_yz(i), ngp_yz(i), MPI_REAL, type_yz(i), &
1032                                ierr )
[667]1033          CALL MPI_TYPE_COMMIT( type_yz(i), ierr )
1034
1035          nxl_l = nxl_l / 2
1036          nxr_l = nxr_l / 2
1037          nys_l = nys_l / 2
1038          nyn_l = nyn_l / 2
1039          nzt_l = nzt_l / 2
[709]1040
[667]1041       ENDDO
[709]1042
1043    ENDIF
[1]1044#endif
1045
[809]1046#if defined( __parallel ) && ! defined ( __check )
[1]1047!
1048!-- Setting of flags for inflow/outflow conditions in case of non-cyclic
[106]1049!-- horizontal boundary conditions.
[1]1050    IF ( pleft == MPI_PROC_NULL )  THEN
[1159]1051       IF ( bc_lr == 'dirichlet/radiation' )  THEN
[1]1052          inflow_l  = .TRUE.
[1159]1053       ELSEIF ( bc_lr == 'radiation/dirichlet' )  THEN
[1]1054          outflow_l = .TRUE.
1055       ENDIF
1056    ENDIF
1057
1058    IF ( pright == MPI_PROC_NULL )  THEN
[1159]1059       IF ( bc_lr == 'dirichlet/radiation' )  THEN
[1]1060          outflow_r = .TRUE.
[1159]1061       ELSEIF ( bc_lr == 'radiation/dirichlet' )  THEN
[1]1062          inflow_r  = .TRUE.
1063       ENDIF
1064    ENDIF
1065
1066    IF ( psouth == MPI_PROC_NULL )  THEN
[1159]1067       IF ( bc_ns == 'dirichlet/radiation' )  THEN
[1]1068          outflow_s = .TRUE.
[1159]1069       ELSEIF ( bc_ns == 'radiation/dirichlet' )  THEN
[1]1070          inflow_s  = .TRUE.
1071       ENDIF
1072    ENDIF
1073
1074    IF ( pnorth == MPI_PROC_NULL )  THEN
[1159]1075       IF ( bc_ns == 'dirichlet/radiation' )  THEN
[1]1076          inflow_n  = .TRUE.
[1159]1077       ELSEIF ( bc_ns == 'radiation/dirichlet' )  THEN
[1]1078          outflow_n = .TRUE.
1079       ENDIF
1080    ENDIF
1081
[151]1082!
1083!-- Broadcast the id of the inflow PE
1084    IF ( inflow_l )  THEN
[163]1085       id_inflow_l = myidx
[151]1086    ELSE
1087       id_inflow_l = 0
1088    ENDIF
[622]1089    IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
[151]1090    CALL MPI_ALLREDUCE( id_inflow_l, id_inflow, 1, MPI_INTEGER, MPI_SUM, &
1091                        comm1dx, ierr )
1092
[163]1093!
1094!-- Broadcast the id of the recycling plane
1095!-- WARNING: needs to be adjusted in case of inflows other than from the left side!
[1139]1096    IF ( NINT( recycling_width / dx ) >= nxl  .AND. &
1097         NINT( recycling_width / dx ) <= nxr )  THEN
[163]1098       id_recycling_l = myidx
1099    ELSE
1100       id_recycling_l = 0
1101    ENDIF
[622]1102    IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
[163]1103    CALL MPI_ALLREDUCE( id_recycling_l, id_recycling, 1, MPI_INTEGER, MPI_SUM, &
1104                        comm1dx, ierr )
1105
[809]1106#elif ! defined ( __parallel )
[1159]1107    IF ( bc_lr == 'dirichlet/radiation' )  THEN
[1]1108       inflow_l  = .TRUE.
1109       outflow_r = .TRUE.
[1159]1110    ELSEIF ( bc_lr == 'radiation/dirichlet' )  THEN
[1]1111       outflow_l = .TRUE.
1112       inflow_r  = .TRUE.
1113    ENDIF
1114
[1159]1115    IF ( bc_ns == 'dirichlet/radiation' )  THEN
[1]1116       inflow_n  = .TRUE.
1117       outflow_s = .TRUE.
[1159]1118    ELSEIF ( bc_ns == 'radiation/dirichlet' )  THEN
[1]1119       outflow_n = .TRUE.
1120       inflow_s  = .TRUE.
1121    ENDIF
1122#endif
[807]1123
[106]1124!
[978]1125!-- At the inflow or outflow, u or v, respectively, have to be calculated for
1126!-- one more grid point.
1127    IF ( inflow_l .OR. outflow_l )  THEN
[106]1128       nxlu = nxl + 1
1129    ELSE
1130       nxlu = nxl
1131    ENDIF
[978]1132    IF ( inflow_s .OR. outflow_s )  THEN
[106]1133       nysv = nys + 1
1134    ELSE
1135       nysv = nys
1136    ENDIF
[1]1137
[114]1138!
1139!-- Allocate wall flag arrays used in the multigrid solver
1140    IF ( psolver == 'multigrid' )  THEN
1141
1142       DO  i = maximum_grid_level, 1, -1
1143
1144           SELECT CASE ( i )
1145
1146              CASE ( 1 )
1147                 ALLOCATE( wall_flags_1(nzb:nzt_mg(i)+1,         &
1148                                        nys_mg(i)-1:nyn_mg(i)+1, &
1149                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1150
1151              CASE ( 2 )
1152                 ALLOCATE( wall_flags_2(nzb:nzt_mg(i)+1,         &
1153                                        nys_mg(i)-1:nyn_mg(i)+1, &
1154                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1155
1156              CASE ( 3 )
1157                 ALLOCATE( wall_flags_3(nzb:nzt_mg(i)+1,         &
1158                                        nys_mg(i)-1:nyn_mg(i)+1, &
1159                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1160
1161              CASE ( 4 )
1162                 ALLOCATE( wall_flags_4(nzb:nzt_mg(i)+1,         &
1163                                        nys_mg(i)-1:nyn_mg(i)+1, &
1164                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1165
1166              CASE ( 5 )
1167                 ALLOCATE( wall_flags_5(nzb:nzt_mg(i)+1,         &
1168                                        nys_mg(i)-1:nyn_mg(i)+1, &
1169                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1170
1171              CASE ( 6 )
1172                 ALLOCATE( wall_flags_6(nzb:nzt_mg(i)+1,         &
1173                                        nys_mg(i)-1:nyn_mg(i)+1, &
1174                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1175
1176              CASE ( 7 )
1177                 ALLOCATE( wall_flags_7(nzb:nzt_mg(i)+1,         &
1178                                        nys_mg(i)-1:nyn_mg(i)+1, &
1179                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1180
1181              CASE ( 8 )
1182                 ALLOCATE( wall_flags_8(nzb:nzt_mg(i)+1,         &
1183                                        nys_mg(i)-1:nyn_mg(i)+1, &
1184                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1185
1186              CASE ( 9 )
1187                 ALLOCATE( wall_flags_9(nzb:nzt_mg(i)+1,         &
1188                                        nys_mg(i)-1:nyn_mg(i)+1, &
1189                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1190
1191              CASE ( 10 )
1192                 ALLOCATE( wall_flags_10(nzb:nzt_mg(i)+1,        &
1193                                        nys_mg(i)-1:nyn_mg(i)+1, &
1194                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1195
1196              CASE DEFAULT
[254]1197                 message_string = 'more than 10 multigrid levels'
1198                 CALL message( 'init_pegrid', 'PA0238', 1, 2, 0, 6, 0 )
[114]1199
1200          END SELECT
1201
1202       ENDDO
1203
1204    ENDIF
1205
[759]1206!
1207!-- Calculate the number of groups into which parallel I/O is split.
1208!-- The default for files which are opened by all PEs (or where each
1209!-- PE opens its own independent file) is that all PEs do input/output
1210!-- in parallel at the same time. This might cause performance problems or even
1211!-- more severe issues, depending on the configuration of the underlying file
1212!-- system.
1213!-- First, set the default:
1214    IF ( maximum_parallel_io_streams == -1  .OR. &
1215         maximum_parallel_io_streams > numprocs )  THEN
1216       maximum_parallel_io_streams = numprocs
1217    ENDIF
1218
1219!
1220!-- Now calculate the number of io_blocks and the io_group to which the
1221!-- respective PE belongs. I/O of the groups is done in serial, but in parallel
1222!-- for all PEs belonging to the same group. A preliminary setting with myid
1223!-- based on MPI_COMM_WORLD has been done in parin.
1224    io_blocks = numprocs / maximum_parallel_io_streams
1225    io_group  = MOD( myid+1, io_blocks )
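!
!-- As an illustration, numprocs = 64 and maximum_parallel_io_streams = 16
!-- give io_blocks = 4, and MOD( myid+1, 4 ) distributes the PEs over the
!-- io_group values 0 to 3.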
1226   
1227
[1]1228 END SUBROUTINE init_pegrid