source: palm/trunk/SOURCE/init_pegrid.f90 @ 1304

Last change on this file: revision 1304 (2014-03-12), checked in by raasch

openmp bugfix + bugfix for single core MPI runs
ulimit option in mrun changed from -Ss to -s

[1]1 SUBROUTINE init_pegrid
[1036]2
3!--------------------------------------------------------------------------------!
4! This file is part of PALM.
5!
6! PALM is free software: you can redistribute it and/or modify it under the terms
7! of the GNU General Public License as published by the Free Software Foundation,
8! either version 3 of the License, or (at your option) any later version.
9!
10! PALM is distributed in the hope that it will be useful, but WITHOUT ANY
11! WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
12! A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
13!
14! You should have received a copy of the GNU General Public License along with
15! PALM. If not, see <http://www.gnu.org/licenses/>.
16!
17! Copyright 1997-2012  Leibniz University Hannover
18!--------------------------------------------------------------------------------!
19!
[254]20! Current revisions:
[1]21! -----------------
[760]22!
[1213]23!
[668]24! Former revisions:
25! -----------------
26! $Id: init_pegrid.f90 1304 2014-03-12 10:29:42Z raasch $
27!
[1213]28! 1212 2013-08-15 08:46:27Z raasch
29! error message for poisfft_hybrid removed
30!
[1160]31! 1159 2013-05-21 11:58:22Z fricke
32! dirichlet/neumann and neumann/dirichlet removed
33!
[1140]34! 1139 2013-04-18 07:25:03Z raasch
35! bugfix for calculating the id of the PE carrying the recycling plane
36!
[1112]37! 1111 2013-03-08 23:54:10Z raasch
38! initialization of poisfft moved to module poisfft
39!
[1093]40! 1092 2013-02-02 11:24:22Z raasch
41! unused variables removed
42!
[1057]43! 1056 2012-11-16 15:28:04Z raasch
44! Indices for arrays n.._mg start from zero due to definition of arrays f2 and
45! p2 as automatic arrays in recursive subroutine next_mg_level
46!
[1042]47! 1041 2012-11-06 02:36:29Z raasch
48! a 2d virtual processor topology is used by default for all machines
49!
[1037]50! 1036 2012-10-22 13:43:42Z raasch
51! code put under GPL (PALM 3.9)
52!
[1004]53! 1003 2012-09-14 14:35:53Z raasch
54! subdomains must have identical size (grid matching = "match" removed)
55!
[1002]56! 1001 2012-09-13 14:08:46Z raasch
57! all actions concerning upstream-spline-method removed
58!
[979]59! 978 2012-08-09 08:28:32Z fricke
60! dirichlet/neumann and neumann/dirichlet added
61! nxlu and nysv are also calculated for inflow boundary
62!
[810]63! 809 2012-01-30 13:32:58Z maronga
64! Bugfix: replaced .AND. and .NOT. with && and ! in the preprocessor directives
65!
[808]66! 807 2012-01-25 11:53:51Z maronga
67! New cpp directive "__check" implemented which is used by check_namelist_files
68!
[781]69! 780 2011-11-10 07:16:47Z raasch
[781]70! Bugfix for rev 778: Misplaced error message moved to the right place
71!
[779]72! 778 2011-11-07 14:18:25Z fricke
73! Calculation of subdomain_size now considers the number of ghost points.
74! Further coarsening on PE0 is now possible for the multigrid solver if the
75! collected field has more grid points than the subdomain of a PE.
76!
[760]77! 759 2011-09-15 13:58:31Z raasch
78! calculation of number of io_blocks and the io_group to which the respective
79! PE belongs
80!
[756]81! 755 2011-08-29 09:55:16Z witha
82! 2d-decomposition is default for lcflow (ForWind cluster in Oldenburg)
83!
[723]84! 722 2011-04-11 06:21:09Z raasch
85! Bugfix: bc_lr/ns_cyc/dirrad/raddir replaced by bc_lr/ns, because variables
86!         are not yet set here; grid_level set to 0
87!
[710]88! 709 2011-03-30 09:31:40Z raasch
89! formatting adjustments
90!
[708]91! 707 2011-03-29 11:39:40Z raasch
92! bc_lr/ns replaced by bc_lr/ns_cyc/dirrad/raddir
93!
[668]94! 667 2010-12-23 12:06:00Z suehring/gryschka
[667]95! Moved determination of target_id's from init_coupling
[669]96! Determination of parameters needed for coupling (coupling_topology, ngp_a,
97! ngp_o) with different grid/processor-topology in ocean and atmosphere
[667]98! Adaption of ngp_xy, ngp_y to a dynamic number of ghost points.
99! The maximum_grid_level changed from 1 to 0. 0 is the normal grid, 1 to
100! maximum_grid_level the grids for multigrid, in which 0 and 1 are normal grids.
101! This distinction is due to reasons of data exchange and performance for the
102! normal grid and grids in poismg.
103! The definition of MPI-Vectors adapted to a dynamic number of ghost points.
104! New MPI-Vectors for data exchange between left and right boundaries added.
105! This is due to reasons of performance (10% faster).
[77]106!
[647]107! 646 2010-12-15 13:03:52Z raasch
108! lctit is now using a 2d decomposition by default
109!
[623]110! 622 2010-12-10 08:08:13Z raasch
111! optional barriers included in order to speed up collective operations
112!
[482]113! 438 2010-02-01 04:32:43Z raasch
114! 2d-decomposition is default for Cray-XT machines
[77]115!
[392]116! 274 2009-03-26 15:11:21Z heinze
117! Output of messages replaced by message handling routine.
118!
[226]119! 206 2008-10-13 14:59:11Z raasch
120! Implementation of a MPI-1 coupling: added __parallel within the __mpi2 part
121! 2d-decomposition is default on SGI-ICE systems
122!
[198]123! 197 2008-09-16 15:29:03Z raasch
124! multigrid levels are limited by subdomains if mg_switch_to_pe0_level = -1,
125! nz is used instead of nnz for calculating mg-levels
126! Collect on PE0 horizontal index bounds from all other PEs,
127! broadcast the id of the inflow PE (using the respective communicator)
128!
[139]129! 114 2007-10-10 00:03:15Z raasch
130! Allocation of wall flag arrays for multigrid solver
131!
[110]132! 108 2007-08-24 15:10:38Z letzel
133! Intercommunicator (comm_inter) and derived data type (type_xy) for
134! coupled model runs created, assign coupling_mode_remote,
135! indices nxlu and nysv are calculated (needed for non-cyclic boundary
136! conditions)
137!
[83]138! 82 2007-04-16 15:40:52Z raasch
139! Cpp-directive lcmuk changed to intel_openmp_bug, setting of host on lcmuk by
140! cpp-directive removed
141!
[77]142! 75 2007-03-22 09:54:05Z raasch
[73]143! uxrp, vynp eliminated,
[75]144! dirichlet/neumann changed to dirichlet/radiation, etc.,
145! poisfft_init is only called if fft-solver is switched on
[1]146!
[3]147! RCS Log replace by Id keyword, revision history cleaned up
148!
[1]149! Revision 1.28  2006/04/26 13:23:32  raasch
150! lcmuk does not understand the !$ comment so a cpp-directive is required
151!
152! Revision 1.1  1997/07/24 11:15:09  raasch
153! Initial revision
154!
155!
156! Description:
157! ------------
158! Determination of the virtual processor topology (if not prescribed by the
159! user) and computation of the grid point number and array bounds of the local
160! domains.
161!------------------------------------------------------------------------------!
162
163    USE control_parameters
[163]164    USE grid_variables
[1]165    USE indices
166    USE pegrid
167    USE statistics
168    USE transpose_indices
169
170
[667]171
[1]172    IMPLICIT NONE
173
[778]174    INTEGER ::  i, id_inflow_l, id_recycling_l, ind(5), j, k,                &
[151]175                maximum_grid_level_l, mg_switch_to_pe0_level_l, mg_levels_x, &
176                mg_levels_y, mg_levels_z, nnx_y, nnx_z, nny_x, nny_z, nnz_x, &
[1092]177                nnz_y, numproc_sqr, nxl_l, nxr_l, nyn_l, nys_l,    &
[778]178                nzb_l, nzt_l, omp_get_num_threads
[1]179
180    INTEGER, DIMENSION(:), ALLOCATABLE ::  ind_all, nxlf, nxrf, nynf, nysf
181
[667]182    INTEGER, DIMENSION(2) :: pdims_remote
183
[1092]184#if defined( __mpi2 )
[1]185    LOGICAL ::  found
[1092]186#endif
[1]187
188!
189!-- Get the number of OpenMP threads
190    !$OMP PARALLEL
[82]191#if defined( __intel_openmp_bug )
[1]192    threads_per_task = omp_get_num_threads()
193#else
194!$  threads_per_task = omp_get_num_threads()
195#endif
196    !$OMP END PARALLEL
197
198
199#if defined( __parallel )
[667]200
[1]201!
202!-- Determine the processor topology or check it, if prescribed by the user
203    IF ( npex == -1  .AND.  npey == -1 )  THEN
204
205!
206!--    Automatic determination of the topology
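!--    pdims(1) is chosen as the largest divisor of numprocs that does not
!--    exceed SQRT( numprocs ); pdims(2) = numprocs / pdims(1) then gives a
!--    virtual 2d topology as close to a square as possible
!--    (e.g. numprocs = 24  -->  pdims = (/ 4, 6 /)).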
[1041]207       numproc_sqr = SQRT( REAL( numprocs ) )
208       pdims(1)    = MAX( numproc_sqr , 1 )
209       DO  WHILE ( MOD( numprocs , pdims(1) ) /= 0 )
210          pdims(1) = pdims(1) - 1
211       ENDDO
212       pdims(2) = numprocs / pdims(1)
[1]213
214    ELSEIF ( npex /= -1  .AND.  npey /= -1 )  THEN
215
216!
217!--    Prescribed by the user. The number of processors of the prescribed
218!--    topology must be equal to the number of PEs available to the job
219       IF ( ( npex * npey ) /= numprocs )  THEN
[274]220          WRITE( message_string, * ) 'number of PEs of the prescribed ',      & 
221                 'topology (', npex*npey,') does not match & the number of ', & 
222                 'PEs available to the job (', numprocs, ')'
[254]223          CALL message( 'init_pegrid', 'PA0221', 1, 2, 0, 6, 0 )
[1]224       ENDIF
225       pdims(1) = npex
226       pdims(2) = npey
227
228    ELSE
229!
230!--    If the processor topology is prescribed by the user, the number of
231!--    PEs must be given in both directions
[274]232       message_string = 'if the processor topology is prescribed by the ' //  &
233                   'user,& both values of "npex" and "npey" must be given ' // &
234                   'in the &NAMELIST-parameter file'
[254]235       CALL message( 'init_pegrid', 'PA0222', 1, 2, 0, 6, 0 )
[1]236
237    ENDIF
238
239!
[622]240!-- For communication speedup, set barriers in front of collective
241!-- communications by default on SGI-type systems
242    IF ( host(3:5) == 'sgi' )  collective_wait = .TRUE.
243
244!
[1]245!-- If necessary, set horizontal boundary conditions to non-cyclic
[722]246    IF ( bc_lr /= 'cyclic' )  cyclic(1) = .FALSE.
247    IF ( bc_ns /= 'cyclic' )  cyclic(2) = .FALSE.
[1]248
[807]249
[809]250#if ! defined( __check)
[1]251!
252!-- Create the virtual processor grid
253    CALL MPI_CART_CREATE( comm_palm, ndim, pdims, cyclic, reorder, &
254                          comm2d, ierr )
255    CALL MPI_COMM_RANK( comm2d, myid, ierr )
256    WRITE (myid_char,'(''_'',I4.4)')  myid
257
258    CALL MPI_CART_COORDS( comm2d, myid, ndim, pcoord, ierr )
259    CALL MPI_CART_SHIFT( comm2d, 0, 1, pleft, pright, ierr )
260    CALL MPI_CART_SHIFT( comm2d, 1, 1, psouth, pnorth, ierr )
261
262!
263!-- Determine sub-topologies for transpositions
264!-- Transposition from z to x:
265    remain_dims(1) = .TRUE.
266    remain_dims(2) = .FALSE.
267    CALL MPI_CART_SUB( comm2d, remain_dims, comm1dx, ierr )
268    CALL MPI_COMM_RANK( comm1dx, myidx, ierr )
269!
270!-- Transposition from x to y
271    remain_dims(1) = .FALSE.
272    remain_dims(2) = .TRUE.
273    CALL MPI_CART_SUB( comm2d, remain_dims, comm1dy, ierr )
274    CALL MPI_COMM_RANK( comm1dy, myidy, ierr )
275
[807]276#endif
[1]277
278!
[1003]279!-- Calculate array bounds along x-direction for every PE.
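!-- All subdomains must have identical size (see rev 1003), i.e. nx+1 must be
!-- an integral multiple of pdims(1) and ny+1 of pdims(2); otherwise the run
!-- is aborted with an error message suggesting a suitable nx (ny).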
[1]280    ALLOCATE( nxlf(0:pdims(1)-1), nxrf(0:pdims(1)-1), nynf(0:pdims(2)-1), &
[1003]281              nysf(0:pdims(2)-1) )
[1]282
[1003]283    IF ( MOD( nx+1 , pdims(1) ) /= 0 )  THEN
[274]284       WRITE( message_string, * ) 'x-direction: gridpoint number (',nx+1,') ',&
285                               'is not an& integral divisor of the number of ', &
286                               'processors (', pdims(1),')'
[254]287       CALL message( 'init_pegrid', 'PA0225', 1, 2, 0, 6, 0 )
[1]288    ELSE
[1003]289       nnx  = ( nx + 1 ) / pdims(1)
[1]290       IF ( nnx*pdims(1) - ( nx + 1) > nnx )  THEN
[274]291          WRITE( message_string, * ) 'x-direction: nx does not match the ',   &
292                       'requirements given by the number of PEs &used',       &
293                       '& please use nx = ', nx - ( pdims(1) - ( nnx*pdims(1) &
294                                   - ( nx + 1 ) ) ), ' instead of nx =', nx
[254]295          CALL message( 'init_pegrid', 'PA0226', 1, 2, 0, 6, 0 )
[1]296       ENDIF
297    ENDIF   
298
299!
300!-- Left and right array bounds, number of gridpoints
301    DO  i = 0, pdims(1)-1
302       nxlf(i)   = i * nnx
303       nxrf(i)   = ( i + 1 ) * nnx - 1
304    ENDDO
305
306!
307!-- Calculate array bounds in y-direction for every PE.
[1003]308    IF ( MOD( ny+1 , pdims(2) ) /= 0 )  THEN
[274]309       WRITE( message_string, * ) 'y-direction: gridpoint number (',ny+1,') ', &
310                           'is not an& integral divisor of the number of ',    &
311                           'processors (', pdims(2),')'
[254]312       CALL message( 'init_pegrid', 'PA0227', 1, 2, 0, 6, 0 )
[1]313    ELSE
[1003]314       nny  = ( ny + 1 ) / pdims(2)
[1]315       IF ( nny*pdims(2) - ( ny + 1) > nny )  THEN
[274]316          WRITE( message_string, * ) 'y-direction: ny does not match the ',   &
317                       'requirements given by the number of PEs &used ',      &
318                       '& please use ny = ', ny - ( pdims(2) - ( nny*pdims(2) &
[254]319                                     - ( ny + 1 ) ) ), ' instead of ny =', ny
320          CALL message( 'init_pegrid', 'PA0228', 1, 2, 0, 6, 0 )
[1]321       ENDIF
322    ENDIF   
323
324!
325!-- South and north array bounds
326    DO  j = 0, pdims(2)-1
327       nysf(j)   = j * nny
328       nynf(j)   = ( j + 1 ) * nny - 1
329    ENDDO
330
331!
332!-- Local array bounds of the respective PEs
[1003]333    nxl = nxlf(pcoord(1))
334    nxr = nxrf(pcoord(1))
335    nys = nysf(pcoord(2))
336    nyn = nynf(pcoord(2))
337    nzb = 0
338    nzt = nz
339    nnz = nz
[1]340
341!
[707]342!-- Set switches to define if the PE is situated at the border of the virtual
343!-- processor grid
344    IF ( nxl == 0 )   left_border_pe  = .TRUE.
345    IF ( nxr == nx )  right_border_pe = .TRUE.
346    IF ( nys == 0 )   south_border_pe = .TRUE.
347    IF ( nyn == ny )  north_border_pe = .TRUE.
348
349!
[1]350!-- Calculate array bounds and gridpoint numbers for the transposed arrays
351!-- (needed in the pressure solver)
352!-- For the transposed arrays, cyclic boundaries as well as top and bottom
353!-- boundaries are omitted, because they are obstructive to the transposition
354
355!
356!-- 1. transposition  z --> x
[1001]357!-- This transposition is not necessary in case of a 1d-decomposition along x
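!-- After this transposition each PE holds the full x-extent of the domain
!-- for a slab of nnz_x = nz / pdims(1) vertical levels (nzb_x to nzt_x),
!-- while keeping its original index bounds along y.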
[1304]358    nys_x = nys
359    nyn_x = nyn
360    nny_x = nny
361    nnz_x = nz / pdims(1)
362    nzb_x = 1 + myidx * nnz_x
363    nzt_x = ( myidx + 1 ) * nnz_x
364    sendrecvcount_zx = nnx * nny * nnz_x
365
[1001]366    IF ( pdims(2) /= 1 )  THEN
[1003]367       IF ( MOD( nz , pdims(1) ) /= 0 )  THEN
[274]368          WRITE( message_string, * ) 'transposition z --> x:',                &
369                       '&nz=',nz,' is not an integral divisor of pdims(1)=', &
370                                                                   pdims(1)
[254]371          CALL message( 'init_pegrid', 'PA0230', 1, 2, 0, 6, 0 )
[1]372       ENDIF
373    ENDIF
374
375!
376!-- 2. transposition  x --> y
[1003]377    nnz_y = nnz_x
378    nzb_y = nzb_x
379    nzt_y = nzt_x
380    IF ( MOD( nx+1 , pdims(2) ) /= 0 )  THEN
[274]381       WRITE( message_string, * ) 'transposition x --> y:',                &
382                         '&nx+1=',nx+1,' is not an integral divisor of ',&
383                         'pdims(2)=',pdims(2)
[254]384       CALL message( 'init_pegrid', 'PA0231', 1, 2, 0, 6, 0 )
[1]385    ENDIF
[1003]386    nnx_y = (nx+1) / pdims(2)
[1]387    nxl_y = myidy * nnx_y
[1003]388    nxr_y = ( myidy + 1 ) * nnx_y - 1
[1]389    sendrecvcount_xy = nnx_y * nny_x * nnz_y
390
391!
392!-- 3. transposition  y --> z  (ELSE:  x --> y  in case of 1D-decomposition
393!-- along x)
[1304]394    nnx_z = nnx_y
395    nxl_z = nxl_y
396    nxr_z = nxr_y
397    nny_z = (ny+1) / pdims(1)
398    nys_z = myidx * nny_z
399    nyn_z = ( myidx + 1 ) * nny_z - 1
400    sendrecvcount_yz = nnx_y * nny_z * nnz_y
401
[1001]402    IF ( pdims(2) /= 1 )  THEN
[1]403!
404!--    y --> z
405!--    This transposition is not necessary in case of a 1d-decomposition
406!--    along x, unless the upstream-spline method is switched on
[1003]407       IF ( MOD( ny+1 , pdims(1) ) /= 0 )  THEN
[274]408          WRITE( message_string, * ) 'transposition y --> z:',            &
409                            '& ny+1=',ny+1,' is not an integral divisor of ',&
410                            'pdims(1)=',pdims(1)
[254]411          CALL message( 'init_pegrid', 'PA0232', 1, 2, 0, 6, 0 )
[1]412       ENDIF
413
414    ELSE
415!
416!--    x --> y. This condition must be fulfilled for a 1D-decomposition along x
[1003]417       IF ( MOD( ny+1 , pdims(1) ) /= 0 )  THEN
[274]418          WRITE( message_string, * ) 'transposition x --> y:',               &
419                            '& ny+1=',ny+1,' is not an integral divisor of ',&
420                            'pdims(1)=',pdims(1)
[254]421          CALL message( 'init_pegrid', 'PA0233', 1, 2, 0, 6, 0 )
[1]422       ENDIF
423
424    ENDIF
425
426!
427!-- Indices for direct transpositions z --> y (used for calculating spectra)
428    IF ( dt_dosp /= 9999999.9 )  THEN
[1003]429       IF ( MOD( nz, pdims(2) ) /= 0 )  THEN
[274]430          WRITE( message_string, * ) 'direct transposition z --> y (needed ', &
431                    'for spectra):& nz=',nz,' is not an integral divisor of ',&
432                    'pdims(2)=',pdims(2)
[254]433          CALL message( 'init_pegrid', 'PA0234', 1, 2, 0, 6, 0 )
[1]434       ELSE
[1003]435          nxl_yd = nxl
436          nxr_yd = nxr
437          nzb_yd = 1 + myidy * ( nz / pdims(2) )
438          nzt_yd = ( myidy + 1 ) * ( nz / pdims(2) )
439          sendrecvcount_zyd = nnx * nny * ( nz / pdims(2) )
[1]440       ENDIF
441    ENDIF
442
443!
444!-- Indices for direct transpositions y --> x (they are only possible in case
445!-- of a 1d-decomposition along x)
446    IF ( pdims(2) == 1 )  THEN
[1003]447       nny_x = nny / pdims(1)
448       nys_x = myid * nny_x
449       nyn_x = ( myid + 1 ) * nny_x - 1
450       nzb_x = 1
451       nzt_x = nz
452       sendrecvcount_xy = nnx * nny_x * nz
[1]453    ENDIF
454
455!
456!-- Indices for direct transpositions x --> y (they are only possible in case
457!-- of a 1d-decomposition along y)
458    IF ( pdims(1) == 1 )  THEN
[1003]459       nnx_y = nnx / pdims(2)
460       nxl_y = myid * nnx_y
461       nxr_y = ( myid + 1 ) * nnx_y - 1
462       nzb_y = 1
463       nzt_y = nz
464       sendrecvcount_xy = nnx_y * nny * nz
[1]465    ENDIF
466
467!
468!-- Arrays for storing the array bounds are not needed any more
469    DEALLOCATE( nxlf , nxrf , nynf , nysf )
470
[807]471
[809]472#if ! defined( __check)
[145]473!
474!-- Collect index bounds from other PEs (to be written to restart file later)
475    ALLOCATE( hor_index_bounds(4,0:numprocs-1) )
476
477    IF ( myid == 0 )  THEN
478
479       hor_index_bounds(1,0) = nxl
480       hor_index_bounds(2,0) = nxr
481       hor_index_bounds(3,0) = nys
482       hor_index_bounds(4,0) = nyn
483
484!
485!--    Receive data from all other PEs
486       DO  i = 1, numprocs-1
487          CALL MPI_RECV( ibuf, 4, MPI_INTEGER, i, MPI_ANY_TAG, comm2d, status, &
488                         ierr )
489          hor_index_bounds(:,i) = ibuf(1:4)
490       ENDDO
491
492    ELSE
493!
494!--    Send index bounds to PE0
495       ibuf(1) = nxl
496       ibuf(2) = nxr
497       ibuf(3) = nys
498       ibuf(4) = nyn
499       CALL MPI_SEND( ibuf, 4, MPI_INTEGER, 0, myid, comm2d, ierr )
500
501    ENDIF
502
[807]503#endif
504
[1]505#if defined( __print )
506!
507!-- Control output
508    IF ( myid == 0 )  THEN
509       PRINT*, '*** processor topology ***'
510       PRINT*, ' '
511       PRINT*, 'myid   pcoord    left right  south north  idx idy   nxl: nxr',&
512               &'   nys: nyn'
513       PRINT*, '------------------------------------------------------------',&
514               &'-----------'
515       WRITE (*,1000)  0, pcoord(1), pcoord(2), pleft, pright, psouth, pnorth, &
516                       myidx, myidy, nxl, nxr, nys, nyn
5171000   FORMAT (I4,2X,'(',I3,',',I3,')',3X,I4,2X,I4,3X,I4,2X,I4,2X,I3,1X,I3, &
518               2(2X,I4,':',I4))
519
520!
[108]521!--    Receive data from the other PEs
[1]522       DO  i = 1,numprocs-1
523          CALL MPI_RECV( ibuf, 12, MPI_INTEGER, i, MPI_ANY_TAG, comm2d, status, &
524                         ierr )
525          WRITE (*,1000)  i, ( ibuf(j) , j = 1,12 )
526       ENDDO
527    ELSE
528
529!
530!--    Send data to PE0
531       ibuf(1) = pcoord(1); ibuf(2) = pcoord(2); ibuf(3) = pleft
532       ibuf(4) = pright; ibuf(5) = psouth; ibuf(6) = pnorth; ibuf(7) = myidx
533       ibuf(8) = myidy; ibuf(9) = nxl; ibuf(10) = nxr; ibuf(11) = nys
534       ibuf(12) = nyn
535       CALL MPI_SEND( ibuf, 12, MPI_INTEGER, 0, myid, comm2d, ierr )       
536    ENDIF
537#endif
538
[809]539#if defined( __parallel ) && ! defined( __check)
[102]540#if defined( __mpi2 )
541!
542!-- In case of coupled runs, get the port name on PE0 of the atmosphere model
543!-- and pass it to PE0 of the ocean model
544    IF ( myid == 0 )  THEN
545
546       IF ( coupling_mode == 'atmosphere_to_ocean' )  THEN
547
548          CALL MPI_OPEN_PORT( MPI_INFO_NULL, port_name, ierr )
[108]549
[102]550          CALL MPI_PUBLISH_NAME( 'palm_coupler', MPI_INFO_NULL, port_name, &
551                                 ierr )
[108]552
553!
[104]554!--       Write a flag file for the ocean model and the other atmosphere
555!--       processes.
556!--       There seems to be a bug in MPICH2 which causes hanging processes
557!--       if execution of LOOKUP_NAME is continued too early
558!--       (i.e. before the port has been created)
559          OPEN( 90, FILE='COUPLING_PORT_OPENED', FORM='FORMATTED' )
560          WRITE ( 90, '(''TRUE'')' )
561          CLOSE ( 90 )
[102]562
563       ELSEIF ( coupling_mode == 'ocean_to_atmosphere' )  THEN
564
[104]565!
566!--       Continue only if the atmosphere model has created the port.
567!--       There seems to be a bug in MPICH2 which causes hanging processes
568!--       if execution of LOOKUP_NAME is continued too early
569!--       (i.e. before the port has been created)
570          INQUIRE( FILE='COUPLING_PORT_OPENED', EXIST=found )
571          DO WHILE ( .NOT. found )
572             INQUIRE( FILE='COUPLING_PORT_OPENED', EXIST=found )
573          ENDDO
574
[102]575          CALL MPI_LOOKUP_NAME( 'palm_coupler', MPI_INFO_NULL, port_name, ierr )
576
577       ENDIF
578
579    ENDIF
580
581!
582!-- In case of coupled runs, establish the connection between the atmosphere
583!-- and the ocean model and define the intercommunicator (comm_inter)
584    CALL MPI_BARRIER( comm2d, ierr )
585    IF ( coupling_mode == 'atmosphere_to_ocean' )  THEN
586
587       CALL MPI_COMM_ACCEPT( port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &
588                             comm_inter, ierr )
[108]589       coupling_mode_remote = 'ocean_to_atmosphere'
590
[102]591    ELSEIF ( coupling_mode == 'ocean_to_atmosphere' )  THEN
592
593       CALL MPI_COMM_CONNECT( port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &
594                              comm_inter, ierr )
[108]595       coupling_mode_remote = 'atmosphere_to_ocean'
596
[102]597    ENDIF
[206]598#endif
[102]599
[667]600!
[709]601!-- Determine the number of ghost point layers
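!-- The 5th-order 'ws-scheme' (Wicker-Skamarock) advection stencil reaches
!-- three grid points beyond the subdomain boundary, hence nbgp = 3;
!-- all other schemes require only a single ghost layer.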
602    IF ( scalar_advec == 'ws-scheme' .OR. momentum_advec == 'ws-scheme' )  THEN
[667]603       nbgp = 3
604    ELSE
605       nbgp = 1
[709]606    ENDIF 
[667]607
[102]608!
[709]609!-- Create a new MPI derived datatype for the exchange of surface (xy) data,
610!-- which is needed for coupled atmosphere-ocean runs.
611!-- First, calculate the number of grid points of an xy-plane.
[667]612    ngp_xy  = ( nxr - nxl + 1 + 2 * nbgp ) * ( nyn - nys + 1 + 2 * nbgp )
[102]613    CALL MPI_TYPE_VECTOR( ngp_xy, 1, nzt-nzb+2, MPI_REAL, type_xy, ierr )
614    CALL MPI_TYPE_COMMIT( type_xy, ierr )
[667]615
[709]616    IF ( TRIM( coupling_mode ) /= 'uncoupled' )  THEN
[667]617   
618!
619!--    Pass the number of grid points of the atmosphere model to
620!--    the ocean model and vice versa
621       IF ( coupling_mode == 'atmosphere_to_ocean' )  THEN
622
623          nx_a = nx
624          ny_a = ny
625
[709]626          IF ( myid == 0 )  THEN
627
628             CALL MPI_SEND( nx_a, 1, MPI_INTEGER, numprocs, 1, comm_inter,  &
629                            ierr )
630             CALL MPI_SEND( ny_a, 1, MPI_INTEGER, numprocs, 2, comm_inter,  &
631                            ierr )
632             CALL MPI_SEND( pdims, 2, MPI_INTEGER, numprocs, 3, comm_inter, &
633                            ierr )
634             CALL MPI_RECV( nx_o, 1, MPI_INTEGER, numprocs, 4, comm_inter,  &
635                            status, ierr )
636             CALL MPI_RECV( ny_o, 1, MPI_INTEGER, numprocs, 5, comm_inter,  &
637                            status, ierr )
638             CALL MPI_RECV( pdims_remote, 2, MPI_INTEGER, numprocs, 6,      &
[667]639                            comm_inter, status, ierr )
640          ENDIF
641
[709]642          CALL MPI_BCAST( nx_o, 1, MPI_INTEGER, 0, comm2d, ierr )
643          CALL MPI_BCAST( ny_o, 1, MPI_INTEGER, 0, comm2d, ierr ) 
644          CALL MPI_BCAST( pdims_remote, 2, MPI_INTEGER, 0, comm2d, ierr )
[667]645       
646       ELSEIF ( coupling_mode == 'ocean_to_atmosphere' )  THEN
647
648          nx_o = nx
649          ny_o = ny 
650
651          IF ( myid == 0 ) THEN
[709]652
653             CALL MPI_RECV( nx_a, 1, MPI_INTEGER, 0, 1, comm_inter, status, &
654                            ierr )
655             CALL MPI_RECV( ny_a, 1, MPI_INTEGER, 0, 2, comm_inter, status, &
656                            ierr )
657             CALL MPI_RECV( pdims_remote, 2, MPI_INTEGER, 0, 3, comm_inter, &
658                            status, ierr )
659             CALL MPI_SEND( nx_o, 1, MPI_INTEGER, 0, 4, comm_inter, ierr )
660             CALL MPI_SEND( ny_o, 1, MPI_INTEGER, 0, 5, comm_inter, ierr )
661             CALL MPI_SEND( pdims, 2, MPI_INTEGER, 0, 6, comm_inter, ierr )
[667]662          ENDIF
663
664          CALL MPI_BCAST( nx_a, 1, MPI_INTEGER, 0, comm2d, ierr)
665          CALL MPI_BCAST( ny_a, 1, MPI_INTEGER, 0, comm2d, ierr) 
666          CALL MPI_BCAST( pdims_remote, 2, MPI_INTEGER, 0, comm2d, ierr) 
667
668       ENDIF
669 
[709]670       ngp_a = ( nx_a+1 + 2 * nbgp ) * ( ny_a+1 + 2 * nbgp )
671       ngp_o = ( nx_o+1 + 2 * nbgp ) * ( ny_o+1 + 2 * nbgp )
[667]672
673!
[709]674!--    Determine whether the horizontal grid and the number of PEs in ocean
675!--    and atmosphere are the same
676       IF ( nx_o == nx_a  .AND.  ny_o == ny_a  .AND.  &
[667]677            pdims(1) == pdims_remote(1) .AND. pdims(2) == pdims_remote(2) ) &
678       THEN
679          coupling_topology = 0
680       ELSE
681          coupling_topology = 1
682       ENDIF 
683
684!
685!--    Determine the target PEs for the exchange between ocean and
686!--    atmosphere (comm2d)
[709]687       IF ( coupling_topology == 0 )  THEN
688!
689!--       In case of identical topologies, every atmosphere PE has exactly one
690!--       ocean PE counterpart and vice versa
691          IF ( TRIM( coupling_mode ) == 'atmosphere_to_ocean' ) THEN
[667]692             target_id = myid + numprocs
693          ELSE
694             target_id = myid 
695          ENDIF
696
697       ELSE
698!
699!--       In case of non-equivalent topologies in ocean and atmosphere, a
700!--       target_id is needed only for PE0 of the ocean and PE0 of the
[709]701!--       atmosphere, since data exchange between ocean and atmosphere is
702!--       done only between these PEs.
703          IF ( myid == 0 )  THEN
704
705             IF ( TRIM( coupling_mode ) == 'atmosphere_to_ocean' )  THEN
[667]706                target_id = numprocs 
707             ELSE
708                target_id = 0
709             ENDIF
[709]710
[667]711          ENDIF
[709]712
[667]713       ENDIF
714
715    ENDIF
716
717
[102]718#endif
719
[1]720#else
721
722!
723!-- Array bounds when running on a single PE (i.e. on a non-parallel
724!-- machine)
[1003]725    nxl = 0
726    nxr = nx
727    nnx = nxr - nxl + 1
728    nys = 0
729    nyn = ny
730    nny = nyn - nys + 1
731    nzb = 0
732    nzt = nz
733    nnz = nz
[1]734
[145]735    ALLOCATE( hor_index_bounds(4,0:0) )
736    hor_index_bounds(1,0) = nxl
737    hor_index_bounds(2,0) = nxr
738    hor_index_bounds(3,0) = nys
739    hor_index_bounds(4,0) = nyn
740
[1]741!
742!-- Array bounds for the pressure solver (in the parallel code, these bounds
743!-- are the ones for the transposed arrays)
[1003]744    nys_x = nys
745    nyn_x = nyn
746    nzb_x = nzb + 1
747    nzt_x = nzt
[1]748
[1003]749    nxl_y = nxl
750    nxr_y = nxr
751    nzb_y = nzb + 1
752    nzt_y = nzt
[1]753
[1003]754    nxl_z = nxl
755    nxr_z = nxr
756    nys_z = nys
757    nyn_z = nyn
[1]758
759#endif
760
761!
762!-- Calculate the number of grid levels necessary for the multigrid Poisson
763!-- solver as well as the gridpoint indices on each level
764    IF ( psolver == 'multigrid' )  THEN
765
766!
767!--    First calculate number of possible grid levels for the subdomains
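!--    A direction allows a further coarsening step as long as its local
!--    gridpoint count is even and still larger than 2; the total number of
!--    levels is limited by the most restrictive of the three directions.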
768       mg_levels_x = 1
769       mg_levels_y = 1
770       mg_levels_z = 1
771
772       i = nnx
773       DO WHILE ( MOD( i, 2 ) == 0  .AND.  i /= 2 )
774          i = i / 2
775          mg_levels_x = mg_levels_x + 1
776       ENDDO
777
778       j = nny
779       DO WHILE ( MOD( j, 2 ) == 0  .AND.  j /= 2 )
780          j = j / 2
781          mg_levels_y = mg_levels_y + 1
782       ENDDO
783
[181]784       k = nz    ! do not use nnz because it might be > nz due to transposition
785                 ! requirements
[1]786       DO WHILE ( MOD( k, 2 ) == 0  .AND.  k /= 2 )
787          k = k / 2
788          mg_levels_z = mg_levels_z + 1
789       ENDDO
790
791       maximum_grid_level = MIN( mg_levels_x, mg_levels_y, mg_levels_z )
792
793!
[1]794!--    Find out if the total domain allows more levels. These additional
[709]795!--    levels are processed identically on all PEs.
[197]796       IF ( numprocs > 1  .AND.  mg_switch_to_pe0_level /= -1 )  THEN
[709]797
[1]798          IF ( mg_levels_z > MIN( mg_levels_x, mg_levels_y ) )  THEN
[709]799
[1]800             mg_switch_to_pe0_level_l = maximum_grid_level
801
802             mg_levels_x = 1
803             mg_levels_y = 1
804
805             i = nx+1
806             DO WHILE ( MOD( i, 2 ) == 0  .AND.  i /= 2 )
807                i = i / 2
808                mg_levels_x = mg_levels_x + 1
809             ENDDO
810
811             j = ny+1
812             DO WHILE ( MOD( j, 2 ) == 0  .AND.  j /= 2 )
813                j = j / 2
814                mg_levels_y = mg_levels_y + 1
815             ENDDO
816
817             maximum_grid_level_l = MIN( mg_levels_x, mg_levels_y, mg_levels_z )
818
819             IF ( maximum_grid_level_l > mg_switch_to_pe0_level_l )  THEN
820                mg_switch_to_pe0_level_l = maximum_grid_level_l - &
821                                           mg_switch_to_pe0_level_l + 1
822             ELSE
823                mg_switch_to_pe0_level_l = 0
824             ENDIF
[709]825
[1]826          ELSE
827             mg_switch_to_pe0_level_l = 0
828             maximum_grid_level_l = maximum_grid_level
[709]829
[1]830          ENDIF
831
832!
833!--       Use the switch level calculated above only if it is not pre-defined
834!--       by the user
835          IF ( mg_switch_to_pe0_level == 0 )  THEN
836             IF ( mg_switch_to_pe0_level_l /= 0 )  THEN
837                mg_switch_to_pe0_level = mg_switch_to_pe0_level_l
838                maximum_grid_level     = maximum_grid_level_l
839             ENDIF
840
841          ELSE
842!
843!--          Check the pre-defined value and reset to default, if necessary
844             IF ( mg_switch_to_pe0_level < mg_switch_to_pe0_level_l  .OR.  &
845                  mg_switch_to_pe0_level >= maximum_grid_level_l )  THEN
[254]846                message_string = 'mg_switch_to_pe0_level ' // &
847                                 'out of range and reset to default (=0)'
848                CALL message( 'init_pegrid', 'PA0235', 0, 1, 0, 6, 0 )
[1]849                mg_switch_to_pe0_level = 0
850             ELSE
851!
852!--             Use the largest number of possible levels anyway and recalculate
853!--             the switch level to this largest number of possible values
854                maximum_grid_level = maximum_grid_level_l
855
856             ENDIF
[709]857
[1]858          ENDIF
859
860       ENDIF
861
[1056]862       ALLOCATE( grid_level_count(maximum_grid_level),                       &
863                 nxl_mg(0:maximum_grid_level), nxr_mg(0:maximum_grid_level), &
864                 nyn_mg(0:maximum_grid_level), nys_mg(0:maximum_grid_level), &
865                 nzt_mg(0:maximum_grid_level) )
[1]866
867       grid_level_count = 0
[1056]868!
869!--    Index zero required as dummy due to definition of arrays f2 and p2 in
870!--    recursive subroutine next_mg_level
871       nxl_mg(0) = 0; nxr_mg(0) = 0; nyn_mg(0) = 0; nys_mg(0) = 0; nzt_mg(0) = 0
[778]872
[1]873       nxl_l = nxl; nxr_l = nxr; nys_l = nys; nyn_l = nyn; nzt_l = nzt
874
875       DO  i = maximum_grid_level, 1 , -1
876
877          IF ( i == mg_switch_to_pe0_level )  THEN
[809]878#if defined( __parallel ) && ! defined( __check )
[1]879!
880!--          Save the grid size of the subdomain at the switch level, because
881!--          it is needed in poismg.
882             ind(1) = nxl_l; ind(2) = nxr_l
883             ind(3) = nys_l; ind(4) = nyn_l
884             ind(5) = nzt_l
885             ALLOCATE( ind_all(5*numprocs), mg_loc_ind(5,0:numprocs-1) )
886             CALL MPI_ALLGATHER( ind, 5, MPI_INTEGER, ind_all, 5, &
887                                 MPI_INTEGER, comm2d, ierr )
888             DO  j = 0, numprocs-1
889                DO  k = 1, 5
890                   mg_loc_ind(k,j) = ind_all(k+j*5)
891                ENDDO
892             ENDDO
893             DEALLOCATE( ind_all )
894!
[709]895!--          Calculate the grid size of the total domain
[1]896             nxr_l = ( nxr_l-nxl_l+1 ) * pdims(1) - 1
897             nxl_l = 0
898             nyn_l = ( nyn_l-nys_l+1 ) * pdims(2) - 1
899             nys_l = 0
900!
901!--          The size of this gathered array must not be larger than the
902!--          array tend, which is used in the multigrid scheme as a temporary
[778]903!--          array. Therefore both the subdomain size of a PE and the size of
904!--          the gathered grid are calculated here. These values are used in
905!--          the routines pres and poismg
906             subdomain_size = ( nxr - nxl + 2 * nbgp + 1 ) * &
907                              ( nyn - nys + 2 * nbgp + 1 ) * ( nzt - nzb + 2 )
[1]908             gathered_size  = ( nxr_l - nxl_l + 3 ) * ( nyn_l - nys_l + 3 ) * &
909                              ( nzt_l - nzb + 2 )
910
[809]911#elif ! defined ( __parallel )
[254]912             message_string = 'multigrid gather/scatter impossible ' // &
[1]913                          'in non parallel mode'
[254]914             CALL message( 'init_pegrid', 'PA0237', 1, 2, 0, 6, 0 )
[1]915#endif
916          ENDIF
917
918          nxl_mg(i) = nxl_l
919          nxr_mg(i) = nxr_l
920          nys_mg(i) = nys_l
921          nyn_mg(i) = nyn_l
922          nzt_mg(i) = nzt_l
923
924          nxl_l = nxl_l / 2 
925          nxr_l = nxr_l / 2
926          nys_l = nys_l / 2 
927          nyn_l = nyn_l / 2 
928          nzt_l = nzt_l / 2 
[778]929
[1]930       ENDDO
931
[780]932!
933!--    Temporary problem: Currently calculation of maxerror in routine poismg crashes
934!--    if grid data are collected on PE0 already on the finest grid level.
935!--    To be solved later.
936       IF ( maximum_grid_level == mg_switch_to_pe0_level )  THEN
937          message_string = 'grid coarsening on subdomain level cannot be performed'
938          CALL message( 'poismg', 'PA0236', 1, 2, 0, 6, 0 )
939       ENDIF
940
[1]941    ELSE
942
[667]943       maximum_grid_level = 0
[1]944
945    ENDIF
946
[722]947!
948!-- Default level 0 tells exchange_horiz that all ghost planes have to be
949!-- exchanged. grid_level is adjusted in poismg, where only one ghost plane
950!-- is required.
951    grid_level = 0
[1]952
[809]953#if defined( __parallel ) && ! defined ( __check )
[1]954!
955!-- Gridpoint number for the exchange of ghost points (y-line for 2D-arrays)
[667]956    ngp_y  = nyn - nys + 1 + 2 * nbgp
[1]957
958!
[709]959!-- Define new MPI derived datatypes for the exchange of ghost points in
960!-- x- and y-direction for 2D-arrays (line)
961    CALL MPI_TYPE_VECTOR( nxr-nxl+1+2*nbgp, nbgp, ngp_y, MPI_REAL, type_x, &
962                          ierr )
[1]963    CALL MPI_TYPE_COMMIT( type_x, ierr )
[709]964    CALL MPI_TYPE_VECTOR( nxr-nxl+1+2*nbgp, nbgp, ngp_y, MPI_INTEGER, &
965                          type_x_int, ierr )
[1]966    CALL MPI_TYPE_COMMIT( type_x_int, ierr )
967
[667]968    CALL MPI_TYPE_VECTOR( nbgp, ngp_y, ngp_y, MPI_REAL, type_y, ierr )
969    CALL MPI_TYPE_COMMIT( type_y, ierr )
970    CALL MPI_TYPE_VECTOR( nbgp, ngp_y, ngp_y, MPI_INTEGER, type_y_int, ierr )
971    CALL MPI_TYPE_COMMIT( type_y_int, ierr )
972
973
[1]974!
975!-- Calculate gridpoint numbers for the exchange of ghost points along x
976!-- (yz-plane for 3D-arrays) and define MPI derived data type(s) for the
977!-- exchange of ghost points in y-direction (xz-plane).
978!-- Do these calculations for the model grid and (if necessary) also
979!-- for the coarser grid levels used in the multigrid method
[667]980    ALLOCATE ( ngp_yz(0:maximum_grid_level), type_xz(0:maximum_grid_level),&
981               type_yz(0:maximum_grid_level) )
[1]982
983    nxl_l = nxl; nxr_l = nxr; nys_l = nys; nyn_l = nyn; nzb_l = nzb; nzt_l = nzt
[709]984
[667]985!
986!-- Distinguish between the model grid, which needs nbgp ghost points, and
987!-- the grid levels for the multigrid scheme, for which only one
988!-- ghost point is necessary.
[709]989!-- First definition of MPI-datatypes for the exchange of ghost layers on the
[667]990!-- normal grid. The following loop is needed for data exchange in poismg.f90.
991!
992!-- Determine number of grid points of yz-layer for exchange
993    ngp_yz(0) = (nzt - nzb + 2) * (nyn - nys + 1 + 2 * nbgp)
[709]994
[667]995!
[709]996!-- Define an MPI-datatype for the exchange of left/right boundaries.
997!-- Although data are contiguous in physical memory (which does not
998!-- necessarily require an MPI-derived datatype), the data exchange between
999!-- left and right PEs using the MPI-derived type is 10% faster than without.
[667]1000    CALL MPI_TYPE_VECTOR( nxr-nxl+1+2*nbgp, nbgp*(nzt-nzb+2), ngp_yz(0), &
[709]1001                          MPI_REAL, type_xz(0), ierr )
[667]1002    CALL MPI_TYPE_COMMIT( type_xz(0), ierr )
[1]1003
[709]1004    CALL MPI_TYPE_VECTOR( nbgp, ngp_yz(0), ngp_yz(0), MPI_REAL, type_yz(0), &
1005                          ierr ) 
[667]1006    CALL MPI_TYPE_COMMIT( type_yz(0), ierr )
[709]1007
[667]1008!
[709]1009!-- Definition of MPI-datatypes for multigrid method (coarser level grids)
[667]1010    IF ( psolver == 'multigrid' )  THEN
1011!   
[709]1012!--    Definition of MPI-datatypes as above, but only 1 ghost level is used
1013       DO  i = maximum_grid_level, 1 , -1
1014
[667]1015          ngp_yz(i) = (nzt_l - nzb_l + 2) * (nyn_l - nys_l + 3)
1016
1017          CALL MPI_TYPE_VECTOR( nxr_l-nxl_l+3, nzt_l-nzb_l+2, ngp_yz(i), &
[709]1018                                MPI_REAL, type_xz(i), ierr )
[667]1019          CALL MPI_TYPE_COMMIT( type_xz(i), ierr )
[1]1020
[709]1021          CALL MPI_TYPE_VECTOR( 1, ngp_yz(i), ngp_yz(i), MPI_REAL, type_yz(i), &
1022                                ierr )
[667]1023          CALL MPI_TYPE_COMMIT( type_yz(i), ierr )
1024
1025          nxl_l = nxl_l / 2
1026          nxr_l = nxr_l / 2
1027          nys_l = nys_l / 2
1028          nyn_l = nyn_l / 2
1029          nzt_l = nzt_l / 2
[709]1030
[667]1031       ENDDO
[709]1032
1033    ENDIF
[1]1034#endif
1035
[809]1036#if defined( __parallel ) && ! defined ( __check )
[1]1037!
1038!-- Setting of flags for inflow/outflow conditions in case of non-cyclic
[106]1039!-- horizontal boundary conditions.
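!-- E.g. bc_lr = 'dirichlet/radiation' means Dirichlet (inflow) at the left
!-- and radiation (outflow) at the right boundary. A flag is set only on PEs
!-- that actually touch the respective boundary, i.e. whose neighbour in that
!-- direction is MPI_PROC_NULL.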
[1]1040    IF ( pleft == MPI_PROC_NULL )  THEN
[1159]1041       IF ( bc_lr == 'dirichlet/radiation' )  THEN
[1]1042          inflow_l  = .TRUE.
[1159]1043       ELSEIF ( bc_lr == 'radiation/dirichlet' )  THEN
[1]1044          outflow_l = .TRUE.
1045       ENDIF
1046    ENDIF
1047
1048    IF ( pright == MPI_PROC_NULL )  THEN
[1159]1049       IF ( bc_lr == 'dirichlet/radiation' )  THEN
[1]1050          outflow_r = .TRUE.
[1159]1051       ELSEIF ( bc_lr == 'radiation/dirichlet' )  THEN
[1]1052          inflow_r  = .TRUE.
1053       ENDIF
1054    ENDIF
1055
1056    IF ( psouth == MPI_PROC_NULL )  THEN
[1159]1057       IF ( bc_ns == 'dirichlet/radiation' )  THEN
[1]1058          outflow_s = .TRUE.
[1159]1059       ELSEIF ( bc_ns == 'radiation/dirichlet' )  THEN
[1]1060          inflow_s  = .TRUE.
1061       ENDIF
1062    ENDIF
1063
1064    IF ( pnorth == MPI_PROC_NULL )  THEN
[1159]1065       IF ( bc_ns == 'dirichlet/radiation' )  THEN
[1]1066          inflow_n  = .TRUE.
[1159]1067       ELSEIF ( bc_ns == 'radiation/dirichlet' )  THEN
[1]1068          outflow_n = .TRUE.
1069       ENDIF
1070    ENDIF
1071
[151]1072!
1073!-- Broadcast the id of the inflow PE
1074    IF ( inflow_l )  THEN
[163]1075       id_inflow_l = myidx
[151]1076    ELSE
1077       id_inflow_l = 0
1078    ENDIF
[622]1079    IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
[151]1080    CALL MPI_ALLREDUCE( id_inflow_l, id_inflow, 1, MPI_INTEGER, MPI_SUM, &
1081                        comm1dx, ierr )
1082
[163]1083!
1084!-- Broadcast the id of the recycling plane
1085!-- WARNING: needs to be adjusted in case of inflows other than from left side!
[1139]1086    IF ( NINT( recycling_width / dx ) >= nxl  .AND. &
1087         NINT( recycling_width / dx ) <= nxr )  THEN
[163]1088       id_recycling_l = myidx
1089    ELSE
1090       id_recycling_l = 0
1091    ENDIF
[622]1092    IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
[163]1093    CALL MPI_ALLREDUCE( id_recycling_l, id_recycling, 1, MPI_INTEGER, MPI_SUM, &
1094                        comm1dx, ierr )
1095
[809]1096#elif ! defined ( __parallel )
[1159]1097    IF ( bc_lr == 'dirichlet/radiation' )  THEN
[1]1098       inflow_l  = .TRUE.
1099       outflow_r = .TRUE.
[1159]1100    ELSEIF ( bc_lr == 'radiation/dirichlet' )  THEN
[1]1101       outflow_l = .TRUE.
1102       inflow_r  = .TRUE.
1103    ENDIF
1104
[1159]1105    IF ( bc_ns == 'dirichlet/radiation' )  THEN
[1]1106       inflow_n  = .TRUE.
1107       outflow_s = .TRUE.
[1159]1108    ELSEIF ( bc_ns == 'radiation/dirichlet' )  THEN
[1]1109       outflow_n = .TRUE.
1110       inflow_s  = .TRUE.
1111    ENDIF
1112#endif
[807]1113
[106]1114!
[978]1115!-- At the inflow or outflow, u or v, respectively, has to be calculated for
1116!-- one more grid point.
1117    IF ( inflow_l .OR. outflow_l )  THEN
[106]1118       nxlu = nxl + 1
1119    ELSE
1120       nxlu = nxl
1121    ENDIF
[978]1122    IF ( inflow_s .OR. outflow_s )  THEN
[106]1123       nysv = nys + 1
1124    ELSE
1125       nysv = nys
1126    ENDIF
[1]1127
[114]1128!
1129!-- Allocate wall flag arrays used in the multigrid solver
1130    IF ( psolver == 'multigrid' )  THEN
1131
1132       DO  i = maximum_grid_level, 1, -1
1133
1134           SELECT CASE ( i )
1135
1136              CASE ( 1 )
1137                 ALLOCATE( wall_flags_1(nzb:nzt_mg(i)+1,         &
1138                                        nys_mg(i)-1:nyn_mg(i)+1, &
1139                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1140
1141              CASE ( 2 )
1142                 ALLOCATE( wall_flags_2(nzb:nzt_mg(i)+1,         &
1143                                        nys_mg(i)-1:nyn_mg(i)+1, &
1144                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1145
1146              CASE ( 3 )
1147                 ALLOCATE( wall_flags_3(nzb:nzt_mg(i)+1,         &
1148                                        nys_mg(i)-1:nyn_mg(i)+1, &
1149                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1150
1151              CASE ( 4 )
1152                 ALLOCATE( wall_flags_4(nzb:nzt_mg(i)+1,         &
1153                                        nys_mg(i)-1:nyn_mg(i)+1, &
1154                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1155
1156              CASE ( 5 )
1157                 ALLOCATE( wall_flags_5(nzb:nzt_mg(i)+1,         &
1158                                        nys_mg(i)-1:nyn_mg(i)+1, &
1159                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1160
1161              CASE ( 6 )
1162                 ALLOCATE( wall_flags_6(nzb:nzt_mg(i)+1,         &
1163                                        nys_mg(i)-1:nyn_mg(i)+1, &
1164                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1165
1166              CASE ( 7 )
1167                 ALLOCATE( wall_flags_7(nzb:nzt_mg(i)+1,         &
1168                                        nys_mg(i)-1:nyn_mg(i)+1, &
1169                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1170
1171              CASE ( 8 )
1172                 ALLOCATE( wall_flags_8(nzb:nzt_mg(i)+1,         &
1173                                        nys_mg(i)-1:nyn_mg(i)+1, &
1174                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1175
1176              CASE ( 9 )
1177                 ALLOCATE( wall_flags_9(nzb:nzt_mg(i)+1,         &
1178                                        nys_mg(i)-1:nyn_mg(i)+1, &
1179                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1180
1181              CASE ( 10 )
1182                 ALLOCATE( wall_flags_10(nzb:nzt_mg(i)+1,        &
1183                                        nys_mg(i)-1:nyn_mg(i)+1, &
1184                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1185
1186              CASE DEFAULT
[254]1187                 message_string = 'more than 10 multigrid levels'
1188                 CALL message( 'init_pegrid', 'PA0238', 1, 2, 0, 6, 0 )
[114]1189
1190          END SELECT
1191
1192       ENDDO
1193
1194    ENDIF
1195
[759]1196!
1197!-- Calculate the number of groups into which parallel I/O is split.
1198!-- The default for files which are opened by all PEs (or where each
1199!-- PE opens its own independent file) is that all PEs do input/output
1200!-- in parallel at the same time. This might cause performance or even more
1201!-- severe problems depending on the configuration of the underlying file
1202!-- system.
1203!-- First, set the default:
1204    IF ( maximum_parallel_io_streams == -1  .OR. &
1205         maximum_parallel_io_streams > numprocs )  THEN
1206       maximum_parallel_io_streams = numprocs
1207    ENDIF
1208
1209!
1210!-- Now calculate the number of io_blocks and the io_group to which the
1211!-- respective PE belongs. I/O of the groups is done in serial, but in parallel
1212!-- for all PEs belonging to the same group. A preliminary setting with myid
1213!-- based on MPI_COMM_WORLD has been done in parin.
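!-- Example: numprocs = 64 and maximum_parallel_io_streams = 16 give
!-- io_blocks = 4, i.e. the PEs are split into 4 groups of 16 which perform
!-- their I/O one group after the other.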
1214    io_blocks = numprocs / maximum_parallel_io_streams
1215    io_group  = MOD( myid+1, io_blocks )
1216   
1217
[1]1218 END SUBROUTINE init_pegrid