source: palm/trunk/SOURCE/init_pegrid.f90 @ 1184

Last change on this file since 1184 was 1160, checked in by fricke, 12 years ago

last commit documented

[1]1 SUBROUTINE init_pegrid
[1036]2
3!--------------------------------------------------------------------------------!
4! This file is part of PALM.
5!
6! PALM is free software: you can redistribute it and/or modify it under the terms
7! of the GNU General Public License as published by the Free Software Foundation,
8! either version 3 of the License, or (at your option) any later version.
9!
10! PALM is distributed in the hope that it will be useful, but WITHOUT ANY
11! WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
12! A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
13!
14! You should have received a copy of the GNU General Public License along with
15! PALM. If not, see <http://www.gnu.org/licenses/>.
16!
17! Copyright 1997-2012  Leibniz University Hannover
18!--------------------------------------------------------------------------------!
19!
[254]20! Current revisions:
[1]21! -----------------
[760]22!
[1140]23!
[668]24! Former revisions:
25! -----------------
26! $Id: init_pegrid.f90 1160 2013-05-21 12:06:24Z heinze $
27!
[1160]28! 1159 2013-05-21 11:58:22Z fricke
29! dirichlet/neumann and neumann/dirichlet removed
30!
[1140]31! 1139 2013-04-18 07:25:03Z raasch
32! bugfix for calculating the id of the PE carrying the recycling plane
33!
[1112]34! 1111 2013-03-08 23:54:10Z raasch
35! initialization of poisfft moved to module poisfft
36!
[1093]37! 1092 2013-02-02 11:24:22Z raasch
38! unused variables removed
39!
[1057]40! 1056 2012-11-16 15:28:04Z raasch
41! Indices for arrays n.._mg start from zero due to definition of arrays f2 and
42! p2 as automatic arrays in recursive subroutine next_mg_level
43!
[1042]44! 1041 2012-11-06 02:36:29Z raasch
45! a 2d virtual processor topology is used by default for all machines
46!
[1037]47! 1036 2012-10-22 13:43:42Z raasch
48! code put under GPL (PALM 3.9)
49!
[1004]50! 1003 2012-09-14 14:35:53Z raasch
51! subdomains must have identical size (grid matching = "match" removed)
52!
[1002]53! 1001 2012-09-13 14:08:46Z raasch
54! all actions concerning upstream-spline-method removed
55!
[979]56! 978 2012-08-09 08:28:32Z fricke
57! dirichlet/neumann and neumann/dirichlet added
58! nxlu and nysv are also calculated for inflow boundary
59!
[810]60! 809 2012-01-30 13:32:58Z maronga
61! Bugfix: replaced .AND. and .NOT. with && and ! in the preprocessor directives
62!
[808]63! 807 2012-01-25 11:53:51Z maronga
64! New cpp directive "__check" implemented which is used by check_namelist_files
65!
[781]66! 780 2011-11-10 07:16:47Z raasch
67! Bugfix for rev 778: Misplaced error message moved to the right place
68!
[779]69! 778 2011-11-07 14:18:25Z fricke
70! Calculation of subdomain_size now considers the number of ghost points.
71! Further coarsening on PE0 is now possible for multigrid solver if the
72! collected field has more grid points than the subdomain of an PE.
73! collected field has more grid points than the subdomain of a PE.
[760]74! 759 2011-09-15 13:58:31Z raasch
75! calculation of number of io_blocks and the io_group to which the respective
76! PE belongs
77!
[756]78! 755 2011-08-29 09:55:16Z witha
79! 2d-decomposition is default for lcflow (ForWind cluster in Oldenburg)
80!
[723]81! 722 2011-04-11 06:21:09Z raasch
82! Bugfix: bc_lr/ns_cyc/dirrad/raddir replaced by bc_lr/ns, because variables
83!         are not yet set here; grid_level set to 0
84!
[710]85! 709 2011-03-30 09:31:40Z raasch
86! formatting adjustments
87!
[708]88! 707 2011-03-29 11:39:40Z raasch
89! bc_lr/ns replaced by bc_lr/ns_cyc/dirrad/raddir
90!
[668]91! 667 2010-12-23 12:06:00Z suehring/gryschka
[667]92! Moved determination of target_id's from init_coupling
[669]93! Determination of parameters needed for coupling (coupling_topology, ngp_a,
94! ngp_o) with different grid/processor-topology in ocean and atmosphere
[667]95! Adaption of ngp_xy, ngp_y to a dynamic number of ghost points.
96! The maximum_grid_level changed from 1 to 0. 0 is the normal grid, 1 to
97! maximum_grid_level the grids for multigrid, in which 0 and 1 are normal grids.
98! This distinction is due to reasons of data exchange and performance for the
99! normal grid and grids in poismg.
100! The definition of MPI-Vectors adapted to a dynamic number of ghost points.
101! New MPI-Vectors for data exchange between left and right boundaries added.
102! This is due to reasons of performance (10% faster).
[77]103!
[647]104! 646 2010-12-15 13:03:52Z raasch
105! lctit is now using a 2d decomposition by default
106!
[623]107! 622 2010-12-10 08:08:13Z raasch
108! optional barriers included in order to speed up collective operations
109!
[482]110! 438 2010-02-01 04:32:43Z raasch
111! 2d-decomposition is default for Cray-XT machines
[77]112!
[392]113! 274 2009-03-26 15:11:21Z heinze
114! Output of messages replaced by message handling routine.
115!
[226]116! 206 2008-10-13 14:59:11Z raasch
117! Implementation of a MPI-1 coupling: added __parallel within the __mpi2 part
118! 2d-decomposition is default on SGI-ICE systems
119!
[198]120! 197 2008-09-16 15:29:03Z raasch
121! multigrid levels are limited by subdomains if mg_switch_to_pe0_level = -1,
122! nz is used instead nnz for calculating mg-levels
123! Collect on PE0 horizontal index bounds from all other PEs,
124! broadcast the id of the inflow PE (using the respective communicator)
125!
[139]126! 114 2007-10-10 00:03:15Z raasch
127! Allocation of wall flag arrays for multigrid solver
128!
[110]129! 108 2007-08-24 15:10:38Z letzel
130! Intercommunicator (comm_inter) and derived data type (type_xy) for
131! coupled model runs created, assign coupling_mode_remote,
132! indices nxlu and nysv are calculated (needed for non-cyclic boundary
133! conditions)
134!
[83]135! 82 2007-04-16 15:40:52Z raasch
136! Cpp-directive lcmuk changed to intel_openmp_bug, setting of host on lcmuk by
137! cpp-directive removed
138!
[77]139! 75 2007-03-22 09:54:05Z raasch
[73]140! uxrp, vynp eliminated,
[75]141! dirichlet/neumann changed to dirichlet/radiation, etc.,
142! poisfft_init is only called if fft-solver is switched on
[1]143!
[3]144! RCS Log replace by Id keyword, revision history cleaned up
145!
[1]146! Revision 1.28  2006/04/26 13:23:32  raasch
147! lcmuk does not understand the !$ comment so a cpp-directive is required
148!
149! Revision 1.1  1997/07/24 11:15:09  raasch
150! Initial revision
151!
152!
153! Description:
154! ------------
155! Determination of the virtual processor topology (if not prescribed by the
156! user) and computation of the grid point number and array bounds of the local
157! domains.
158!------------------------------------------------------------------------------!
159
160    USE control_parameters
[163]161    USE grid_variables
[1]162    USE indices
163    USE pegrid
164    USE statistics
165    USE transpose_indices
166
167
[667]168
[1]169    IMPLICIT NONE
170
[778]171    INTEGER ::  i, id_inflow_l, id_recycling_l, ind(5), j, k,                &
[151]172                maximum_grid_level_l, mg_switch_to_pe0_level_l, mg_levels_x, &
173                mg_levels_y, mg_levels_z, nnx_y, nnx_z, nny_x, nny_z, nnz_x, &
[1092]174                nnz_y, numproc_sqr, nxl_l, nxr_l, nyn_l, nys_l,    &
[778]175                nzb_l, nzt_l, omp_get_num_threads
[1]176
177    INTEGER, DIMENSION(:), ALLOCATABLE ::  ind_all, nxlf, nxrf, nynf, nysf
178
[667]179    INTEGER, DIMENSION(2) :: pdims_remote
180
[1092]181#if defined( __mpi2 )
[1]182    LOGICAL ::  found
[1092]183#endif
[1]184
185!
186!-- Get the number of OpenMP threads
187    !$OMP PARALLEL
[82]188#if defined( __intel_openmp_bug )
[1]189    threads_per_task = omp_get_num_threads()
190#else
191!$  threads_per_task = omp_get_num_threads()
192#endif
193    !$OMP END PARALLEL
194
195
196#if defined( __parallel )
[667]197
[1]198!
199!-- Determine the processor topology or check it, if prescribed by the user
200    IF ( npex == -1  .AND.  npey == -1 )  THEN
201
202!
203!--    Automatic determination of the topology
[1041]204       numproc_sqr = SQRT( REAL( numprocs ) )
205       pdims(1)    = MAX( numproc_sqr , 1 )
206       DO  WHILE ( MOD( numprocs , pdims(1) ) /= 0 )
207          pdims(1) = pdims(1) - 1
208       ENDDO
209       pdims(2) = numprocs / pdims(1)
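!
!--    Example: for numprocs = 12, numproc_sqr truncates to 3, 12 is
!--    divisible by 3, hence pdims = (/ 3, 4 /). For a prime number of PEs
!--    the loop ends at pdims(1) = 1, i.e. a 1d-decomposition along y.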
[1]210
211    ELSEIF ( npex /= -1  .AND.  npey /= -1 )  THEN
212
213!
214!--    Prescribed by user. Number of processors on the prescribed topology
215!--    must be equal to the number of PEs available to the job
216       IF ( ( npex * npey ) /= numprocs )  THEN
[274]217          WRITE( message_string, * ) 'number of PEs of the prescribed ',      & 
218                 'topology (', npex*npey,') does not match & the number of ', & 
219                 'PEs available to the job (', numprocs, ')'
[254]220          CALL message( 'init_pegrid', 'PA0221', 1, 2, 0, 6, 0 )
[1]221       ENDIF
222       pdims(1) = npex
223       pdims(2) = npey
224
225    ELSE
226!
227!--    If the processor topology is prescribed by the user, the number of
228!--    PEs must be given in both directions
[274]229       message_string = 'if the processor topology is prescribed by the ' //   &
230                   'user, & both values of "npex" and "npey" must be given ' // &
231                   'in the &NAMELIST-parameter file'
[254]232       CALL message( 'init_pegrid', 'PA0222', 1, 2, 0, 6, 0 )
[1]233
234    ENDIF
235
236!
237!-- The hybrid solver can only be used in case of a 1d-decomposition along x
238    IF ( pdims(2) /= 1  .AND.  psolver == 'poisfft_hybrid' )  THEN
[254]239       message_string = 'psolver = "poisfft_hybrid" can only be' // &
240                        '& used in case of a 1d-decomposition along x'
241       CALL message( 'init_pegrid', 'PA0223', 1, 2, 0, 6, 0 )
[1]242    ENDIF
243
244!
[622]245!-- For communication speedup, set barriers in front of collective
246!-- communications by default on SGI-type systems
247    IF ( host(3:5) == 'sgi' )  collective_wait = .TRUE.
248
249!
[1]250!-- If necessary, set horizontal boundary conditions to non-cyclic
[722]251    IF ( bc_lr /= 'cyclic' )  cyclic(1) = .FALSE.
252    IF ( bc_ns /= 'cyclic' )  cyclic(2) = .FALSE.
[1]253
[807]254
[809]255#if ! defined( __check)
[1]256!
257!-- Create the virtual processor grid
258    CALL MPI_CART_CREATE( comm_palm, ndim, pdims, cyclic, reorder, &
259                          comm2d, ierr )
260    CALL MPI_COMM_RANK( comm2d, myid, ierr )
261    WRITE (myid_char,'(''_'',I4.4)')  myid
262
263    CALL MPI_CART_COORDS( comm2d, myid, ndim, pcoord, ierr )
264    CALL MPI_CART_SHIFT( comm2d, 0, 1, pleft, pright, ierr )
265    CALL MPI_CART_SHIFT( comm2d, 1, 1, psouth, pnorth, ierr )
266
267!
268!-- Determine sub-topologies for transpositions
269!-- Transposition from z to x:
270    remain_dims(1) = .TRUE.
271    remain_dims(2) = .FALSE.
272    CALL MPI_CART_SUB( comm2d, remain_dims, comm1dx, ierr )
273    CALL MPI_COMM_RANK( comm1dx, myidx, ierr )
274!
275!-- Transposition from x to y
276    remain_dims(1) = .FALSE.
277    remain_dims(2) = .TRUE.
278    CALL MPI_CART_SUB( comm2d, remain_dims, comm1dy, ierr )
279    CALL MPI_COMM_RANK( comm1dy, myidy, ierr )
280
[807]281#endif
[1]282
283!
[1003]284!-- Calculate array bounds along x-direction for every PE.
[1]285    ALLOCATE( nxlf(0:pdims(1)-1), nxrf(0:pdims(1)-1), nynf(0:pdims(2)-1), &
[1003]286              nysf(0:pdims(2)-1) )
[1]287
[1003]288    IF ( MOD( nx+1 , pdims(1) ) /= 0 )  THEN
[274]289       WRITE( message_string, * ) 'x-direction: gridpoint number (',nx+1,') ',&
290                               'is not an& integral divisor of the number of ', &
291                               'processors (', pdims(1),')'
[254]292       CALL message( 'init_pegrid', 'PA0225', 1, 2, 0, 6, 0 )
[1]293    ELSE
[1003]294       nnx  = ( nx + 1 ) / pdims(1)
[1]295       IF ( nnx*pdims(1) - ( nx + 1) > nnx )  THEN
[274]296          WRITE( message_string, * ) 'x-direction: nx does not match the ',   & 
297                       'requirements given by the number of PEs &used',       &
298                       '& please use nx = ', nx - ( pdims(1) - ( nnx*pdims(1) &
299                                   - ( nx + 1 ) ) ), ' instead of nx =', nx
[254]300          CALL message( 'init_pegrid', 'PA0226', 1, 2, 0, 6, 0 )
[1]301       ENDIF
302    ENDIF   
303
304!
305!-- Left and right array bounds, number of gridpoints
306    DO  i = 0, pdims(1)-1
307       nxlf(i)   = i * nnx
308       nxrf(i)   = ( i + 1 ) * nnx - 1
309    ENDDO
310
311!
312!-- Calculate array bounds in y-direction for every PE.
[1003]313    IF ( MOD( ny+1 , pdims(2) ) /= 0 )  THEN
[274]314       WRITE( message_string, * ) 'y-direction: gridpoint number (',ny+1,') ', &
315                           'is not an& integral divisor of the number of ',    &
316                           'processors (', pdims(2),')'
[254]317       CALL message( 'init_pegrid', 'PA0227', 1, 2, 0, 6, 0 )
[1]318    ELSE
[1003]319       nny  = ( ny + 1 ) / pdims(2)
[1]320       IF ( nny*pdims(2) - ( ny + 1) > nny )  THEN
[274]321          WRITE( message_string, * ) 'y-direction: ny does not match the ',   &
322                       'requirements given by the number of PEs &used ',      &
323                       '& please use ny = ', ny - ( pdims(2) - ( nny*pdims(2) &
[254]324                                     - ( ny + 1 ) ) ), ' instead of ny =', ny
325          CALL message( 'init_pegrid', 'PA0228', 1, 2, 0, 6, 0 )
[1]326       ENDIF
327    ENDIF   
328
329!
330!-- South and north array bounds
331    DO  j = 0, pdims(2)-1
332       nysf(j)   = j * nny
333       nynf(j)   = ( j + 1 ) * nny - 1
334    ENDDO
335
336!
337!-- Local array bounds of the respective PEs
[1003]338    nxl = nxlf(pcoord(1))
339    nxr = nxrf(pcoord(1))
340    nys = nysf(pcoord(2))
341    nyn = nynf(pcoord(2))
342    nzb = 0
343    nzt = nz
344    nnz = nz
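!
!-- Example: for nx = ny = 63 (64 grid points each) and pdims = (/ 4, 2 /),
!-- nnx = 16 and nny = 32, so the PE with pcoord = (/ 1, 0 /) obtains
!-- nxl = 16, nxr = 31, nys = 0, nyn = 31.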
[1]345
346!
[707]347!-- Set switches to define if the PE is situated at the border of the virtual
348!-- processor grid
349    IF ( nxl == 0 )   left_border_pe  = .TRUE.
350    IF ( nxr == nx )  right_border_pe = .TRUE.
351    IF ( nys == 0 )   south_border_pe = .TRUE.
352    IF ( nyn == ny )  north_border_pe = .TRUE.
353
354!
[1]355!-- Calculate array bounds and gridpoint numbers for the transposed arrays
356!-- (needed in the pressure solver)
357!-- For the transposed arrays, cyclic boundaries as well as top and bottom
358!-- boundaries are omitted, because they are obstructive to the transposition
359
360!
361!-- 1. transposition  z --> x
[1001]362!-- This transposition is not necessary in case of a 1d-decomposition along x
363    IF ( pdims(2) /= 1 )  THEN
[1]364
[1003]365       nys_x = nys
366       nyn_x = nyn
367       nny_x = nny
368       IF ( MOD( nz , pdims(1) ) /= 0 )  THEN
[274]369          WRITE( message_string, * ) 'transposition z --> x:',                &
370                       '&nz=',nz,' is not an integral divisor of pdims(1)=', &
371                                                                   pdims(1)
[254]372          CALL message( 'init_pegrid', 'PA0230', 1, 2, 0, 6, 0 )
[1]373       ENDIF
[1003]374       nnz_x = nz / pdims(1)
375       nzb_x = 1 + myidx * nnz_x
376       nzt_x = ( myidx + 1 ) * nnz_x
[1]377       sendrecvcount_zx = nnx * nny * nnz_x
378
[181]379    ELSE
380!
381!---   Setting of dummy values because otherwise variables are undefined in
382!---   the next step  x --> y
383!---   WARNING: This case still has to be clarified!
[1003]384       nnz_x = 1
385       nzb_x = 1
386       nzt_x = 1
387       nny_x = nny
[181]388
[1]389    ENDIF
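!
!-- Example for the 1. transposition: with nz = 64 and pdims(1) = 4 each PE
!-- holds nnz_x = 16 levels after the z --> x transposition, e.g. the PE
!-- with myidx = 1 gets nzb_x = 17 and nzt_x = 32.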
390
391!
392!-- 2. transposition  x --> y
[1003]393    nnz_y = nnz_x
394    nzb_y = nzb_x
395    nzt_y = nzt_x
396    IF ( MOD( nx+1 , pdims(2) ) /= 0 )  THEN
[274]397       WRITE( message_string, * ) 'transposition x --> y:',                &
398                         '&nx+1=',nx+1,' is not an integral divisor of ',&
399                         'pdims(2)=',pdims(2)
[254]400       CALL message( 'init_pegrid', 'PA0231', 1, 2, 0, 6, 0 )
[1]401    ENDIF
[1003]402    nnx_y = (nx+1) / pdims(2)
[1]403    nxl_y = myidy * nnx_y
[1003]404    nxr_y = ( myidy + 1 ) * nnx_y - 1
[1]405    sendrecvcount_xy = nnx_y * nny_x * nnz_y
406
407!
408!-- 3. transposition  y --> z  (ELSE:  x --> y  in case of 1D-decomposition
409!-- along x)
[1001]410    IF ( pdims(2) /= 1 )  THEN
[1]411!
412!--    y --> z
413!--    This transposition is not necessary in case of a 1d-decomposition
414!--    along x, except when the upstream-spline method is switched on
[1003]415       nnx_z = nnx_y
416       nxl_z = nxl_y
417       nxr_z = nxr_y
418       IF ( MOD( ny+1 , pdims(1) ) /= 0 )  THEN
[274]419          WRITE( message_string, * ) 'transposition y --> z:',            &
420                            '& ny+1=',ny+1,' is not an integral divisor of ',&
421                            'pdims(1)=',pdims(1)
[254]422          CALL message( 'init_pegrid', 'PA0232', 1, 2, 0, 6, 0 )
[1]423       ENDIF
[1003]424       nny_z = (ny+1) / pdims(1)
425       nys_z = myidx * nny_z
426       nyn_z = ( myidx + 1 ) * nny_z - 1
[1]427       sendrecvcount_yz = nnx_y * nny_z * nnz_y
428
429    ELSE
430!
431!--    x --> y. This condition must be fulfilled for a 1D-decomposition along x
[1003]432       IF ( MOD( ny+1 , pdims(1) ) /= 0 )  THEN
[274]433          WRITE( message_string, * ) 'transposition x --> y:',               &
434                            '& ny+1=',ny+1,' is not an integral divisor of ',&
435                            'pdims(1)=',pdims(1)
[254]436          CALL message( 'init_pegrid', 'PA0233', 1, 2, 0, 6, 0 )
[1]437       ENDIF
438
439    ENDIF
440
441!
442!-- Indices for direct transpositions z --> y (used for calculating spectra)
443    IF ( dt_dosp /= 9999999.9 )  THEN
[1003]444       IF ( MOD( nz, pdims(2) ) /= 0 )  THEN
[274]445          WRITE( message_string, * ) 'direct transposition z --> y (needed ', &
446                    'for spectra):& nz=',nz,' is not an integral divisor of ',&
447                    'pdims(2)=',pdims(2)
[254]448          CALL message( 'init_pegrid', 'PA0234', 1, 2, 0, 6, 0 )
[1]449       ELSE
[1003]450          nxl_yd = nxl
451          nxr_yd = nxr
452          nzb_yd = 1 + myidy * ( nz / pdims(2) )
453          nzt_yd = ( myidy + 1 ) * ( nz / pdims(2) )
454          sendrecvcount_zyd = nnx * nny * ( nz / pdims(2) )
[1]455       ENDIF
456    ENDIF
457
458!
459!-- Indices for direct transpositions y --> x (they are only possible in case
460!-- of a 1d-decomposition along x)
461    IF ( pdims(2) == 1 )  THEN
[1003]462       nny_x = nny / pdims(1)
463       nys_x = myid * nny_x
464       nyn_x = ( myid + 1 ) * nny_x - 1
465       nzb_x = 1
466       nzt_x = nz
467       sendrecvcount_xy = nnx * nny_x * nz
[1]468    ENDIF
469
470!
471!-- Indices for direct transpositions x --> y (they are only possible in case
472!-- of a 1d-decomposition along y)
473    IF ( pdims(1) == 1 )  THEN
[1003]474       nnx_y = nnx / pdims(2)
475       nxl_y = myid * nnx_y
476       nxr_y = ( myid + 1 ) * nnx_y - 1
477       nzb_y = 1
478       nzt_y = nz
479       sendrecvcount_xy = nnx_y * nny * nz
[1]480    ENDIF
481
482!
483!-- The arrays for storing the array bounds are not needed any more
484    DEALLOCATE( nxlf , nxrf , nynf , nysf )
485
[807]486
[809]487#if ! defined( __check)
[145]488!
489!-- Collect index bounds from other PEs (to be written to restart file later)
490    ALLOCATE( hor_index_bounds(4,0:numprocs-1) )
491
492    IF ( myid == 0 )  THEN
493
494       hor_index_bounds(1,0) = nxl
495       hor_index_bounds(2,0) = nxr
496       hor_index_bounds(3,0) = nys
497       hor_index_bounds(4,0) = nyn
498
499!
500!--    Receive data from all other PEs
501       DO  i = 1, numprocs-1
502          CALL MPI_RECV( ibuf, 4, MPI_INTEGER, i, MPI_ANY_TAG, comm2d, status, &
503                         ierr )
504          hor_index_bounds(:,i) = ibuf(1:4)
505       ENDDO
506
507    ELSE
508!
509!--    Send index bounds to PE0
510       ibuf(1) = nxl
511       ibuf(2) = nxr
512       ibuf(3) = nys
513       ibuf(4) = nyn
514       CALL MPI_SEND( ibuf, 4, MPI_INTEGER, 0, myid, comm2d, ierr )
515
516    ENDIF
517
[807]518#endif
519
[1]520#if defined( __print )
521!
522!-- Control output
523    IF ( myid == 0 )  THEN
524       PRINT*, '*** processor topology ***'
525       PRINT*, ' '
526       PRINT*, 'myid   pcoord    left right  south north  idx idy   nxl: nxr',&
527               &'   nys: nyn'
528       PRINT*, '------------------------------------------------------------',&
529               &'-----------'
530       WRITE (*,1000)  0, pcoord(1), pcoord(2), pleft, pright, psouth, pnorth, &
531                       myidx, myidy, nxl, nxr, nys, nyn
5321000   FORMAT (I4,2X,'(',I3,',',I3,')',3X,I4,2X,I4,3X,I4,2X,I4,2X,I3,1X,I3, &
533               2(2X,I4,':',I4))
534
535!
[108]536!--    Receive data from the other PEs
[1]537       DO  i = 1,numprocs-1
538          CALL MPI_RECV( ibuf, 12, MPI_INTEGER, i, MPI_ANY_TAG, comm2d, status, &
539                         ierr )
540          WRITE (*,1000)  i, ( ibuf(j) , j = 1,12 )
541       ENDDO
542    ELSE
543
544!
545!--    Send data to PE0
546       ibuf(1) = pcoord(1); ibuf(2) = pcoord(2); ibuf(3) = pleft
547       ibuf(4) = pright; ibuf(5) = psouth; ibuf(6) = pnorth; ibuf(7) = myidx
548       ibuf(8) = myidy; ibuf(9) = nxl; ibuf(10) = nxr; ibuf(11) = nys
549       ibuf(12) = nyn
550       CALL MPI_SEND( ibuf, 12, MPI_INTEGER, 0, myid, comm2d, ierr )       
551    ENDIF
552#endif
553
[809]554#if defined( __parallel ) && ! defined( __check)
[102]555#if defined( __mpi2 )
556!
557!-- In case of coupled runs, get the port name on PE0 of the atmosphere model
558!-- and pass it to PE0 of the ocean model
559    IF ( myid == 0 )  THEN
560
561       IF ( coupling_mode == 'atmosphere_to_ocean' )  THEN
562
563          CALL MPI_OPEN_PORT( MPI_INFO_NULL, port_name, ierr )
[108]564
[102]565          CALL MPI_PUBLISH_NAME( 'palm_coupler', MPI_INFO_NULL, port_name, &
566                                 ierr )
[108]567
568!
[104]569!--       Write a flag file for the ocean model and the other atmosphere
570!--       processes.
571!--       There seems to be a bug in MPICH2 which causes hanging processes
572!--       in case that execution of LOOKUP_NAME is continued too early
573!--       (i.e. before the port has been created)
574          OPEN( 90, FILE='COUPLING_PORT_OPENED', FORM='FORMATTED' )
575          WRITE ( 90, '(''TRUE'')' )
576          CLOSE ( 90 )
[102]577
578       ELSEIF ( coupling_mode == 'ocean_to_atmosphere' )  THEN
579
[104]580!
581!--       Continue only if the atmosphere model has created the port.
582!--       There seems to be a bug in MPICH2 which causes hanging processes
583!--       in case that execution of LOOKUP_NAME is continued too early
584!--       (i.e. before the port has been created)
585          INQUIRE( FILE='COUPLING_PORT_OPENED', EXIST=found )
586          DO WHILE ( .NOT. found )
587             INQUIRE( FILE='COUPLING_PORT_OPENED', EXIST=found )
588          ENDDO
589
[102]590          CALL MPI_LOOKUP_NAME( 'palm_coupler', MPI_INFO_NULL, port_name, ierr )
591
592       ENDIF
593
594    ENDIF
595
596!
597!-- In case of coupled runs, establish the connection between the atmosphere
598!-- and the ocean model and define the intercommunicator (comm_inter)
599    CALL MPI_BARRIER( comm2d, ierr )
600    IF ( coupling_mode == 'atmosphere_to_ocean' )  THEN
601
602       CALL MPI_COMM_ACCEPT( port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &
603                             comm_inter, ierr )
[108]604       coupling_mode_remote = 'ocean_to_atmosphere'
605
[102]606    ELSEIF ( coupling_mode == 'ocean_to_atmosphere' )  THEN
607
608       CALL MPI_COMM_CONNECT( port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &
609                              comm_inter, ierr )
[108]610       coupling_mode_remote = 'atmosphere_to_ocean'
611
[102]612    ENDIF
[206]613#endif
[102]614
[667]615!
[709]616!-- Determine the number of ghost point layers
617    IF ( scalar_advec == 'ws-scheme' .OR. momentum_advec == 'ws-scheme' )  THEN
[667]618       nbgp = 3
619    ELSE
620       nbgp = 1
[709]621    ENDIF 
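!
!-- (The Wicker-Skamarock advection scheme uses a wider, 5th-order stencil
!-- and therefore needs three ghost layers, whereas one layer is sufficient
!-- for all other schemes.)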
[667]622
[102]623!
[709]624!-- Create a new MPI derived datatype for the exchange of surface (xy) data,
625!-- which is needed for coupled atmosphere-ocean runs.
626!-- First, calculate number of grid points of an xy-plane.
[667]627    ngp_xy  = ( nxr - nxl + 1 + 2 * nbgp ) * ( nyn - nys + 1 + 2 * nbgp )
[102]628    CALL MPI_TYPE_VECTOR( ngp_xy, 1, nzt-nzb+2, MPI_REAL, type_xy, ierr )
629    CALL MPI_TYPE_COMMIT( type_xy, ierr )
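!
!-- (type_xy selects one REAL value out of every vertical column, i.e. a
!-- complete horizontal cross-section including ghost points, assuming the
!-- usual (z,y,x) storage order with nzt-nzb+2 levels per column.)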
[667]630
[709]631    IF ( TRIM( coupling_mode ) /= 'uncoupled' )  THEN
[667]632   
633!
634!--    Pass the number of grid points of the atmosphere model to
635!--    the ocean model and vice versa
636       IF ( coupling_mode == 'atmosphere_to_ocean' )  THEN
637
638          nx_a = nx
639          ny_a = ny
640
[709]641          IF ( myid == 0 )  THEN
642
643             CALL MPI_SEND( nx_a, 1, MPI_INTEGER, numprocs, 1, comm_inter,  &
644                            ierr )
645             CALL MPI_SEND( ny_a, 1, MPI_INTEGER, numprocs, 2, comm_inter,  &
646                            ierr )
647             CALL MPI_SEND( pdims, 2, MPI_INTEGER, numprocs, 3, comm_inter, &
648                            ierr )
649             CALL MPI_RECV( nx_o, 1, MPI_INTEGER, numprocs, 4, comm_inter,  &
650                            status, ierr )
651             CALL MPI_RECV( ny_o, 1, MPI_INTEGER, numprocs, 5, comm_inter,  &
652                            status, ierr )
653             CALL MPI_RECV( pdims_remote, 2, MPI_INTEGER, numprocs, 6,      &
[667]654                            comm_inter, status, ierr )
655          ENDIF
656
[709]657          CALL MPI_BCAST( nx_o, 1, MPI_INTEGER, 0, comm2d, ierr )
658          CALL MPI_BCAST( ny_o, 1, MPI_INTEGER, 0, comm2d, ierr ) 
659          CALL MPI_BCAST( pdims_remote, 2, MPI_INTEGER, 0, comm2d, ierr )
[667]660       
661       ELSEIF ( coupling_mode == 'ocean_to_atmosphere' )  THEN
662
663          nx_o = nx
664          ny_o = ny 
665
666          IF ( myid == 0 ) THEN
[709]667
668             CALL MPI_RECV( nx_a, 1, MPI_INTEGER, 0, 1, comm_inter, status, &
669                            ierr )
670             CALL MPI_RECV( ny_a, 1, MPI_INTEGER, 0, 2, comm_inter, status, &
671                            ierr )
672             CALL MPI_RECV( pdims_remote, 2, MPI_INTEGER, 0, 3, comm_inter, &
673                            status, ierr )
674             CALL MPI_SEND( nx_o, 1, MPI_INTEGER, 0, 4, comm_inter, ierr )
675             CALL MPI_SEND( ny_o, 1, MPI_INTEGER, 0, 5, comm_inter, ierr )
676             CALL MPI_SEND( pdims, 2, MPI_INTEGER, 0, 6, comm_inter, ierr )
[667]677          ENDIF
678
679          CALL MPI_BCAST( nx_a, 1, MPI_INTEGER, 0, comm2d, ierr)
680          CALL MPI_BCAST( ny_a, 1, MPI_INTEGER, 0, comm2d, ierr) 
681          CALL MPI_BCAST( pdims_remote, 2, MPI_INTEGER, 0, comm2d, ierr) 
682
683       ENDIF
684 
[709]685       ngp_a = ( nx_a+1 + 2 * nbgp ) * ( ny_a+1 + 2 * nbgp )
686       ngp_o = ( nx_o+1 + 2 * nbgp ) * ( ny_o+1 + 2 * nbgp )
[667]687
688!
[709]689!--    Determine if the horizontal grid and the number of PEs in ocean and
690!--    atmosphere are the same or not
691       IF ( nx_o == nx_a  .AND.  ny_o == ny_a  .AND.  &
[667]692            pdims(1) == pdims_remote(1) .AND. pdims(2) == pdims_remote(2) ) &
693       THEN
694          coupling_topology = 0
695       ELSE
696          coupling_topology = 1
697       ENDIF 
698
699!
700!--    Determine the target PEs for the exchange between ocean and
701!--    atmosphere (comm2d)
[709]702       IF ( coupling_topology == 0 )  THEN
703!
704!--       In case of identical topologies, every atmosphere PE has exactly one
705!--       ocean PE counterpart and vice versa
706          IF ( TRIM( coupling_mode ) == 'atmosphere_to_ocean' ) THEN
[667]707             target_id = myid + numprocs
708          ELSE
709             target_id = myid 
710          ENDIF
711
712       ELSE
713!
714!--       In case of non-equivalent topologies in ocean and atmosphere, a
715!--       target_id is only needed for PE0 of the ocean and PE0 of the
[709]716!--       atmosphere, since data exchange between ocean and atmosphere will
717!--       be done only between these PEs.
718          IF ( myid == 0 )  THEN
719
720             IF ( TRIM( coupling_mode ) == 'atmosphere_to_ocean' )  THEN
[667]721                target_id = numprocs 
722             ELSE
723                target_id = 0
724             ENDIF
[709]725
[667]726          ENDIF
[709]727
[667]728       ENDIF
729
730    ENDIF
731
732
[102]733#endif
734
[1]735#else
736
737!
738!-- Array bounds when running on a single PE (i.e. on a non-parallel
739!-- machine)
[1003]740    nxl = 0
741    nxr = nx
742    nnx = nxr - nxl + 1
743    nys = 0
744    nyn = ny
745    nny = nyn - nys + 1
746    nzb = 0
747    nzt = nz
748    nnz = nz
[1]749
[145]750    ALLOCATE( hor_index_bounds(4,0:0) )
751    hor_index_bounds(1,0) = nxl
752    hor_index_bounds(2,0) = nxr
753    hor_index_bounds(3,0) = nys
754    hor_index_bounds(4,0) = nyn
755
[1]756!
757!-- Array bounds for the pressure solver (in the parallel code, these bounds
758!-- are the ones for the transposed arrays)
[1003]759    nys_x = nys
760    nyn_x = nyn
761    nzb_x = nzb + 1
762    nzt_x = nzt
[1]763
[1003]764    nxl_y = nxl
765    nxr_y = nxr
766    nzb_y = nzb + 1
767    nzt_y = nzt
[1]768
[1003]769    nxl_z = nxl
770    nxr_z = nxr
771    nys_z = nys
772    nyn_z = nyn
[1]773
774#endif
775
776!
777!-- Calculate number of grid levels necessary for the multigrid poisson solver
778!-- as well as the gridpoint indices on each level
779    IF ( psolver == 'multigrid' )  THEN
780
781!
782!--    First calculate number of possible grid levels for the subdomains
783       mg_levels_x = 1
784       mg_levels_y = 1
785       mg_levels_z = 1
786
787       i = nnx
788       DO WHILE ( MOD( i, 2 ) == 0  .AND.  i /= 2 )
789          i = i / 2
790          mg_levels_x = mg_levels_x + 1
791       ENDDO
792
793       j = nny
794       DO WHILE ( MOD( j, 2 ) == 0  .AND.  j /= 2 )
795          j = j / 2
796          mg_levels_y = mg_levels_y + 1
797       ENDDO
798
[181]799       k = nz    ! do not use nnz because it might be > nz due to transposition
800                 ! requirements
[1]801       DO WHILE ( MOD( k, 2 ) == 0  .AND.  k /= 2 )
802          k = k / 2
803          mg_levels_z = mg_levels_z + 1
804       ENDDO
805
806       maximum_grid_level = MIN( mg_levels_x, mg_levels_y, mg_levels_z )
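!
!--    Example: for nnx = nny = 16 and nz = 64 the loops above give
!--    mg_levels_x = mg_levels_y = 4 and mg_levels_z = 6, hence
!--    maximum_grid_level = 4 on the subdomains.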
807
808!
809!--    Find out if the total domain allows for more levels. These additional
[709]810!--    levels are processed identically on all PEs.
[197]811       IF ( numprocs > 1  .AND.  mg_switch_to_pe0_level /= -1 )  THEN
[709]812
[1]813          IF ( mg_levels_z > MIN( mg_levels_x, mg_levels_y ) )  THEN
[709]814
[1]815             mg_switch_to_pe0_level_l = maximum_grid_level
816
817             mg_levels_x = 1
818             mg_levels_y = 1
819
820             i = nx+1
821             DO WHILE ( MOD( i, 2 ) == 0  .AND.  i /= 2 )
822                i = i / 2
823                mg_levels_x = mg_levels_x + 1
824             ENDDO
825
826             j = ny+1
827             DO WHILE ( MOD( j, 2 ) == 0  .AND.  j /= 2 )
828                j = j / 2
829                mg_levels_y = mg_levels_y + 1
830             ENDDO
831
832             maximum_grid_level_l = MIN( mg_levels_x, mg_levels_y, mg_levels_z )
833
834             IF ( maximum_grid_level_l > mg_switch_to_pe0_level_l )  THEN
835                mg_switch_to_pe0_level_l = maximum_grid_level_l - &
836                                           mg_switch_to_pe0_level_l + 1
837             ELSE
838                mg_switch_to_pe0_level_l = 0
839             ENDIF
[709]840
[1]841          ELSE
842             mg_switch_to_pe0_level_l = 0
843             maximum_grid_level_l = maximum_grid_level
[709]844
[1]845          ENDIF
846
847!
848!--       Use switch level calculated above only if it is not pre-defined
849!--       by the user
850          IF ( mg_switch_to_pe0_level == 0 )  THEN
851             IF ( mg_switch_to_pe0_level_l /= 0 )  THEN
852                mg_switch_to_pe0_level = mg_switch_to_pe0_level_l
853                maximum_grid_level     = maximum_grid_level_l
854             ENDIF
855
856          ELSE
857!
858!--          Check pre-defined value and reset to default, if necessary
859             IF ( mg_switch_to_pe0_level < mg_switch_to_pe0_level_l  .OR.  &
860                  mg_switch_to_pe0_level >= maximum_grid_level_l )  THEN
[254]861                message_string = 'mg_switch_to_pe0_level ' // &
862                                 'out of range and reset to default (=0)'
863                CALL message( 'init_pegrid', 'PA0235', 0, 1, 0, 6, 0 )
[1]864                mg_switch_to_pe0_level = 0
865             ELSE
866!
867!--             Use the largest possible number of grid levels anyway and
868!--             recalculate the switch level accordingly
869                maximum_grid_level = maximum_grid_level_l
870
871             ENDIF
[709]872
[1]873          ENDIF
874
875       ENDIF
876
[1056]877       ALLOCATE( grid_level_count(maximum_grid_level),                       &
878                 nxl_mg(0:maximum_grid_level), nxr_mg(0:maximum_grid_level), &
879                 nyn_mg(0:maximum_grid_level), nys_mg(0:maximum_grid_level), &
880                 nzt_mg(0:maximum_grid_level) )
[1]881
882       grid_level_count = 0
[1056]883!
884!--    Index zero required as dummy due to definition of arrays f2 and p2 in
885!--    recursive subroutine next_mg_level
886       nxl_mg(0) = 0; nxr_mg(0) = 0; nyn_mg(0) = 0; nys_mg(0) = 0; nzt_mg(0) = 0
[778]887
[1]888       nxl_l = nxl; nxr_l = nxr; nys_l = nys; nyn_l = nyn; nzt_l = nzt
889
890       DO  i = maximum_grid_level, 1 , -1
891
892          IF ( i == mg_switch_to_pe0_level )  THEN
[809]893#if defined( __parallel ) && ! defined( __check )
[1]894!
895!--          Save the grid size of the subdomain at the switch level, because
896!--          it is needed in poismg.
897             ind(1) = nxl_l; ind(2) = nxr_l
898             ind(3) = nys_l; ind(4) = nyn_l
899             ind(5) = nzt_l
900             ALLOCATE( ind_all(5*numprocs), mg_loc_ind(5,0:numprocs-1) )
901             CALL MPI_ALLGATHER( ind, 5, MPI_INTEGER, ind_all, 5, &
902                                 MPI_INTEGER, comm2d, ierr )
903             DO  j = 0, numprocs-1
904                DO  k = 1, 5
905                   mg_loc_ind(k,j) = ind_all(k+j*5)
906                ENDDO
907             ENDDO
908             DEALLOCATE( ind_all )
909!
[709]910!--          Calculate the grid size of the total domain
[1]911             nxr_l = ( nxr_l-nxl_l+1 ) * pdims(1) - 1
912             nxl_l = 0
913             nyn_l = ( nyn_l-nys_l+1 ) * pdims(2) - 1
914             nys_l = 0
915!
916!--          The size of this gathered array must not be larger than the
917!--          array tend, which is used in the multigrid scheme as a temporary
[778]918!--          array. Therefore the subdomain size of a PE and the size of the
919!--          gathered grid are calculated here. These values are used in 
920!--          routines pres and poismg
921             subdomain_size = ( nxr - nxl + 2 * nbgp + 1 ) * &
922                              ( nyn - nys + 2 * nbgp + 1 ) * ( nzt - nzb + 2 )
[1]923             gathered_size  = ( nxr_l - nxl_l + 3 ) * ( nyn_l - nys_l + 3 ) * &
924                              ( nzt_l - nzb + 2 )
925
[809]926#elif ! defined ( __parallel )
[254]927             message_string = 'multigrid gather/scatter impossible ' // &
[1]928                          'in non parallel mode'
[254]929             CALL message( 'init_pegrid', 'PA0237', 1, 2, 0, 6, 0 )
[1]930#endif
931          ENDIF
932
933          nxl_mg(i) = nxl_l
934          nxr_mg(i) = nxr_l
935          nys_mg(i) = nys_l
936          nyn_mg(i) = nyn_l
937          nzt_mg(i) = nzt_l
938
939          nxl_l = nxl_l / 2 
940          nxr_l = nxr_l / 2
941          nys_l = nys_l / 2 
942          nyn_l = nyn_l / 2 
943          nzt_l = nzt_l / 2 
[778]944
[1]945       ENDDO
946
[780]947!
948!--    Temporary problem: Currently calculation of maxerror in routine poismg crashes
949!--    if grid data are collected on PE0 already on the finest grid level.
950!--    To be solved later.
951       IF ( maximum_grid_level == mg_switch_to_pe0_level )  THEN
952          message_string = 'grid coarsening on subdomain level cannot be performed'
953          CALL message( 'poismg', 'PA0236', 1, 2, 0, 6, 0 )
954       ENDIF
955
[1]956    ELSE
957
[667]958       maximum_grid_level = 0
[1]959
960    ENDIF
961
[722]962!
963!-- Default level 0 tells exchange_horiz that all ghost planes have to be
964!-- exchanged. grid_level is adjusted in poismg, where only one ghost plane
965!-- is required.
966    grid_level = 0
[1]967
[809]968#if defined( __parallel ) && ! defined ( __check )
[1]969!
970!-- Gridpoint number for the exchange of ghost points (y-line for 2D-arrays)
[667]971    ngp_y  = nyn - nys + 1 + 2 * nbgp
[1]972
973!
[709]974!-- Define new MPI derived datatypes for the exchange of ghost points in
975!-- x- and y-direction for 2D-arrays (line)
976    CALL MPI_TYPE_VECTOR( nxr-nxl+1+2*nbgp, nbgp, ngp_y, MPI_REAL, type_x, &
977                          ierr )
[1]978    CALL MPI_TYPE_COMMIT( type_x, ierr )
[709]979    CALL MPI_TYPE_VECTOR( nxr-nxl+1+2*nbgp, nbgp, ngp_y, MPI_INTEGER, &
980                          type_x_int, ierr )
[1]981    CALL MPI_TYPE_COMMIT( type_x_int, ierr )
982
[667]983    CALL MPI_TYPE_VECTOR( nbgp, ngp_y, ngp_y, MPI_REAL, type_y, ierr )
984    CALL MPI_TYPE_COMMIT( type_y, ierr )
985    CALL MPI_TYPE_VECTOR( nbgp, ngp_y, ngp_y, MPI_INTEGER, type_y_int, ierr )
986    CALL MPI_TYPE_COMMIT( type_y_int, ierr )
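!
!-- (Assuming the usual (y,x) ordering of the 2d-arrays with y varying
!-- fastest in memory, type_y describes nbgp contiguous ghost lines along y,
!-- while type_x picks nbgp y-values out of each x-column, i.e. nbgp ghost
!-- lines along x.)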
987
988
[1]989!
990!-- Calculate gridpoint numbers for the exchange of ghost points along x
991!-- (yz-plane for 3D-arrays) and define MPI derived data type(s) for the
992!-- exchange of ghost points in y-direction (xz-plane).
993!-- Do these calculations for the model grid and (if necessary) also
994!-- for the coarser grid levels used in the multigrid method
[667]995    ALLOCATE ( ngp_yz(0:maximum_grid_level), type_xz(0:maximum_grid_level),&
996               type_yz(0:maximum_grid_level) )
[1]997
998    nxl_l = nxl; nxr_l = nxr; nys_l = nys; nyn_l = nyn; nzb_l = nzb; nzt_l = nzt
[709]999
[667]1000!
1001!-- Discern between the model grid, which needs nbgp ghost points and
1002!-- grid levels for the multigrid scheme. In the latter case only one
1003!-- ghost point is necessary.
[709]1004!-- First definition of MPI-datatypes for exchange of ghost layers on normal
[667]1005!-- grid. The following loop is needed for data exchange in poismg.f90.
1006!
1007!-- Determine number of grid points of yz-layer for exchange
1008    ngp_yz(0) = (nzt - nzb + 2) * (nyn - nys + 1 + 2 * nbgp)
[709]1009
[667]1010!
[709]1011!-- Define an MPI-datatype for the exchange of left/right boundaries.
1012!-- Although data are contiguous in physical memory (which does not
1013!-- necessarily require an MPI-derived datatype), the data exchange between
[709]1014!-- left and right PEs using the MPI-derived type is 10% faster than without.
[667]1015    CALL MPI_TYPE_VECTOR( nxr-nxl+1+2*nbgp, nbgp*(nzt-nzb+2), ngp_yz(0), &
[709]1016                          MPI_REAL, type_xz(0), ierr )
[667]1017    CALL MPI_TYPE_COMMIT( type_xz(0), ierr )
[1]1018
[709]1019    CALL MPI_TYPE_VECTOR( nbgp, ngp_yz(0), ngp_yz(0), MPI_REAL, type_yz(0), &
1020                          ierr ) 
[667]1021    CALL MPI_TYPE_COMMIT( type_yz(0), ierr )
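!
!-- (As defined above, type_yz(0) consists of nbgp complete, contiguous
!-- yz-planes, while type_xz(0) picks nbgp y-columns of full z-extent out of
!-- each of the nxr-nxl+1+2*nbgp x-indices, i.e. an xz-slab of thickness
!-- nbgp.)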
[709]1022
[667]1023!
[709]1024!-- Definition of MPI-datatypes for multigrid method (coarser level grids)
[667]1025    IF ( psolver == 'multigrid' )  THEN
1026!   
[709]1027!--    Definition of MPI-datatypes as above, but only 1 ghost level is used
1028       DO  i = maximum_grid_level, 1 , -1
1029
[667]1030          ngp_yz(i) = (nzt_l - nzb_l + 2) * (nyn_l - nys_l + 3)
1031
1032          CALL MPI_TYPE_VECTOR( nxr_l-nxl_l+3, nzt_l-nzb_l+2, ngp_yz(i), &
[709]1033                                MPI_REAL, type_xz(i), ierr )
[667]1034          CALL MPI_TYPE_COMMIT( type_xz(i), ierr )
[1]1035
[709]1036          CALL MPI_TYPE_VECTOR( 1, ngp_yz(i), ngp_yz(i), MPI_REAL, type_yz(i), &
1037                                ierr )
[667]1038          CALL MPI_TYPE_COMMIT( type_yz(i), ierr )
1039
1040          nxl_l = nxl_l / 2
1041          nxr_l = nxr_l / 2
1042          nys_l = nys_l / 2
1043          nyn_l = nyn_l / 2
1044          nzt_l = nzt_l / 2
[709]1045
[667]1046       ENDDO
[709]1047
1048    ENDIF
[1]1049#endif
1050
[809]1051#if defined( __parallel ) && ! defined ( __check )
[1]1052!
1053!-- Setting of flags for inflow/outflow conditions in case of non-cyclic
[106]1054!-- horizontal boundary conditions.
[1]1055    IF ( pleft == MPI_PROC_NULL )  THEN
[1159]1056       IF ( bc_lr == 'dirichlet/radiation' )  THEN
[1]1057          inflow_l  = .TRUE.
[1159]1058       ELSEIF ( bc_lr == 'radiation/dirichlet' )  THEN
[1]1059          outflow_l = .TRUE.
1060       ENDIF
1061    ENDIF
1062
1063    IF ( pright == MPI_PROC_NULL )  THEN
[1159]1064       IF ( bc_lr == 'dirichlet/radiation' )  THEN
[1]1065          outflow_r = .TRUE.
[1159]1066       ELSEIF ( bc_lr == 'radiation/dirichlet' )  THEN
[1]1067          inflow_r  = .TRUE.
1068       ENDIF
1069    ENDIF
1070
1071    IF ( psouth == MPI_PROC_NULL )  THEN
[1159]1072       IF ( bc_ns == 'dirichlet/radiation' )  THEN
[1]1073          outflow_s = .TRUE.
[1159]1074       ELSEIF ( bc_ns == 'radiation/dirichlet' )  THEN
[1]1075          inflow_s  = .TRUE.
1076       ENDIF
1077    ENDIF
1078
1079    IF ( pnorth == MPI_PROC_NULL )  THEN
[1159]1080       IF ( bc_ns == 'dirichlet/radiation' )  THEN
[1]1081          inflow_n  = .TRUE.
[1159]1082       ELSEIF ( bc_ns == 'radiation/dirichlet' )  THEN
[1]1083          outflow_n = .TRUE.
1084       ENDIF
1085    ENDIF
1086
[151]1087!
1088!-- Broadcast the id of the inflow PE
1089    IF ( inflow_l )  THEN
[163]1090       id_inflow_l = myidx
[151]1091    ELSE
1092       id_inflow_l = 0
1093    ENDIF
[622]1094    IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
[151]1095    CALL MPI_ALLREDUCE( id_inflow_l, id_inflow, 1, MPI_INTEGER, MPI_SUM, &
1096                        comm1dx, ierr )
1097
[163]1098!
1099!-- Broadcast the id of the recycling plane
1100!-- WARNING: needs to be adjusted in case of inflows other than from left side!
[1139]1101    IF ( NINT( recycling_width / dx ) >= nxl  .AND. &
1102         NINT( recycling_width / dx ) <= nxr )  THEN
[163]1103       id_recycling_l = myidx
1104    ELSE
1105       id_recycling_l = 0
1106    ENDIF
[622]1107    IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
[163]1108    CALL MPI_ALLREDUCE( id_recycling_l, id_recycling, 1, MPI_INTEGER, MPI_SUM, &
1109                        comm1dx, ierr )
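!
!-- (Within a comm1dx row exactly one PE contains the recycling plane and
!-- all others contribute zero, so the MPI_SUM yields the x-coordinate
!-- (myidx) of that PE; the same reasoning applies to id_inflow above.)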
1110
[809]1111#elif ! defined ( __parallel )
[1159]1112    IF ( bc_lr == 'dirichlet/radiation' )  THEN
[1]1113       inflow_l  = .TRUE.
1114       outflow_r = .TRUE.
[1159]1115    ELSEIF ( bc_lr == 'radiation/dirichlet' )  THEN
[1]1116       outflow_l = .TRUE.
1117       inflow_r  = .TRUE.
1118    ENDIF
1119
[1159]1120    IF ( bc_ns == 'dirichlet/radiation' )  THEN
[1]1121       inflow_n  = .TRUE.
1122       outflow_s = .TRUE.
[1159]1123    ELSEIF ( bc_ns == 'radiation/dirichlet' )  THEN
[1]1124       outflow_n = .TRUE.
1125       inflow_s  = .TRUE.
1126    ENDIF
1127#endif
[807]1128
[106]1129!
[978]1130!-- At the inflow or outflow, u or v, respectively, has to be calculated at
1131!-- one additional grid point.
1132    IF ( inflow_l .OR. outflow_l )  THEN
[106]1133       nxlu = nxl + 1
1134    ELSE
1135       nxlu = nxl
1136    ENDIF
[978]1137    IF ( inflow_s .OR. outflow_s )  THEN
[106]1138       nysv = nys + 1
1139    ELSE
1140       nysv = nys
1141    ENDIF
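!
!-- (On the staggered grid, u(nxl) at a left inflow/outflow boundary and
!-- v(nys) at a south inflow/outflow boundary lie on the boundary itself and
!-- are set by the boundary condition, so the prognostic equations start at
!-- nxlu and nysv, respectively.)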
[1]1142
[114]1143!
1144!-- Allocate wall flag arrays used in the multigrid solver
1145    IF ( psolver == 'multigrid' )  THEN
1146
1147       DO  i = maximum_grid_level, 1, -1
1148
1149           SELECT CASE ( i )
1150
1151              CASE ( 1 )
1152                 ALLOCATE( wall_flags_1(nzb:nzt_mg(i)+1,         &
1153                                        nys_mg(i)-1:nyn_mg(i)+1, &
1154                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1155
1156              CASE ( 2 )
1157                 ALLOCATE( wall_flags_2(nzb:nzt_mg(i)+1,         &
1158                                        nys_mg(i)-1:nyn_mg(i)+1, &
1159                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1160
1161              CASE ( 3 )
1162                 ALLOCATE( wall_flags_3(nzb:nzt_mg(i)+1,         &
1163                                        nys_mg(i)-1:nyn_mg(i)+1, &
1164                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1165
1166              CASE ( 4 )
1167                 ALLOCATE( wall_flags_4(nzb:nzt_mg(i)+1,         &
1168                                        nys_mg(i)-1:nyn_mg(i)+1, &
1169                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1170
1171              CASE ( 5 )
1172                 ALLOCATE( wall_flags_5(nzb:nzt_mg(i)+1,         &
1173                                        nys_mg(i)-1:nyn_mg(i)+1, &
1174                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1175
1176              CASE ( 6 )
1177                 ALLOCATE( wall_flags_6(nzb:nzt_mg(i)+1,         &
1178                                        nys_mg(i)-1:nyn_mg(i)+1, &
1179                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1180
1181              CASE ( 7 )
1182                 ALLOCATE( wall_flags_7(nzb:nzt_mg(i)+1,         &
1183                                        nys_mg(i)-1:nyn_mg(i)+1, &
1184                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1185
1186              CASE ( 8 )
1187                 ALLOCATE( wall_flags_8(nzb:nzt_mg(i)+1,         &
1188                                        nys_mg(i)-1:nyn_mg(i)+1, &
1189                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1190
1191              CASE ( 9 )
1192                 ALLOCATE( wall_flags_9(nzb:nzt_mg(i)+1,         &
1193                                        nys_mg(i)-1:nyn_mg(i)+1, &
1194                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1195
1196              CASE ( 10 )
1197                 ALLOCATE( wall_flags_10(nzb:nzt_mg(i)+1,        &
1198                                        nys_mg(i)-1:nyn_mg(i)+1, &
1199                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1200
1201              CASE DEFAULT
[254]1202                 message_string = 'more than 10 multigrid levels'
1203                 CALL message( 'init_pegrid', 'PA0238', 1, 2, 0, 6, 0 )
[114]1204
1205          END SELECT
1206
1207       ENDDO
1208
1209    ENDIF
1210
[759]1211!
1212!-- Calculate the number of groups into which parallel I/O is split.
1213!-- The default for files which are opened by all PEs (or where each
1214!-- PE opens its own independent file) is that all PEs do input/output
1215!-- in parallel at the same time. This might cause performance problems or
1216!-- even more severe issues, depending on the configuration of the underlying file
1217!-- system.
1218!-- First, set the default:
1219    IF ( maximum_parallel_io_streams == -1  .OR. &
1220         maximum_parallel_io_streams > numprocs )  THEN
1221       maximum_parallel_io_streams = numprocs
1222    ENDIF
1223
1224!
1225!-- Now calculate the number of io_blocks and the io_group to which the
1226!-- respective PE belongs. I/O of the groups is done in serial, but in parallel
1227!-- for all PEs belonging to the same group. A preliminary setting with myid
1228!-- based on MPI_COMM_WORLD has been done in parin.
1229    io_blocks = numprocs / maximum_parallel_io_streams
1230    io_group  = MOD( myid+1, io_blocks )
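!
!-- Example: with numprocs = 64 and maximum_parallel_io_streams = 16,
!-- io_blocks = 4 and io_group takes the values 0 to 3, i.e. the PEs are
!-- split into four groups of 16 which perform their I/O one after the
!-- other.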
1231   
1232
[1]1233 END SUBROUTINE init_pegrid