source: palm/trunk/SOURCE/init_pegrid.f90 @ 1313

Last change on this file since 1313 was 1310, checked in by raasch, 11 years ago

update of GPL copyright

[1]1 SUBROUTINE init_pegrid
[1036]2
3!--------------------------------------------------------------------------------!
4! This file is part of PALM.
5!
6! PALM is free software: you can redistribute it and/or modify it under the terms
7! of the GNU General Public License as published by the Free Software Foundation,
8! either version 3 of the License, or (at your option) any later version.
9!
10! PALM is distributed in the hope that it will be useful, but WITHOUT ANY
11! WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR
12! A PARTICULAR PURPOSE.  See the GNU General Public License for more details.
13!
14! You should have received a copy of the GNU General Public License along with
15! PALM. If not, see <http://www.gnu.org/licenses/>.
16!
[1310]17! Copyright 1997-2014 Leibniz Universitaet Hannover
[1036]18!--------------------------------------------------------------------------------!
19!
[254]20! Current revisions:
[1]21! -----------------
[760]22!
[1213]23!
[668]24! Former revisions:
25! -----------------
26! $Id: init_pegrid.f90 1310 2014-03-14 08:01:56Z fricke $
27!
[1305]28! 1304 2014-03-12 10:29:42Z raasch
29! bugfix: single core MPI runs missed some settings of transpose indices
30!
[1213]31! 1212 2013-08-15 08:46:27Z raasch
32! error message for poisfft_hybrid removed
33!
[1160]34! 1159 2013-05-21 11:58:22Z fricke
35! dirichlet/neumann and neumann/dirichlet removed
36!
[1140]37! 1139 2013-04-18 07:25:03Z raasch
38! bugfix for calculating the id of the PE carrying the recycling plane
39!
[1112]40! 1111 2013-03-08 23:54:10Z raasch
41! initialization of poisfft moved to module poisfft
42!
[1093]43! 1092 2013-02-02 11:24:22Z raasch
44! unused variables removed
45!
[1057]46! 1056 2012-11-16 15:28:04Z raasch
47! Indices for arrays n.._mg start from zero due to definition of arrays f2 and
48! p2 as automatic arrays in recursive subroutine next_mg_level
49!
[1042]50! 1041 2012-11-06 02:36:29Z raasch
51! a 2d virtual processor topology is used by default for all machines
52!
[1037]53! 1036 2012-10-22 13:43:42Z raasch
54! code put under GPL (PALM 3.9)
55!
[1004]56! 1003 2012-09-14 14:35:53Z raasch
57! subdomains must have identical size (grid matching = "match" removed)
58!
[1002]59! 1001 2012-09-13 14:08:46Z raasch
60! all actions concerning upstream-spline-method removed
61!
[979]62! 978 2012-08-09 08:28:32Z fricke
63! dirichlet/neumann and neumann/dirichlet added
64! nxlu and nysv are also calculated for inflow boundary
65!
[810]66! 809 2012-01-30 13:32:58Z maronga
67! Bugfix: replaced .AND. and .NOT. with && and ! in the preprocessor directives
68!
[808]69! 807 2012-01-25 11:53:51Z maronga
70! New cpp directive "__check" implemented which is used by check_namelist_files
71!
[781]72! 780 2011-11-10 07:16:47Z raasch
73! Bugfix for rev 778: Misplaced error message moved to the right place
74!
[779]75! 778 2011-11-07 14:18:25Z fricke
76! Calculation of subdomain_size now considers the number of ghost points.
77! Further coarsening on PE0 is now possible for multigrid solver if the
78! collected field has more grid points than the subdomain of a PE.
79!
[760]80! 759 2011-09-15 13:58:31Z raasch
81! calculation of number of io_blocks and the io_group to which the respective
82! PE belongs
83!
[756]84! 755 2011-08-29 09:55:16Z witha
85! 2d-decomposition is default for lcflow (ForWind cluster in Oldenburg)
86!
[723]87! 722 2011-04-11 06:21:09Z raasch
88! Bugfix: bc_lr/ns_cyc/dirrad/raddir replaced by bc_lr/ns, because variables
89!         are not yet set here; grid_level set to 0
90!
[710]91! 709 2011-03-30 09:31:40Z raasch
92! formatting adjustments
93!
[708]94! 707 2011-03-29 11:39:40Z raasch
95! bc_lr/ns replaced by bc_lr/ns_cyc/dirrad/raddir
96!
[668]97! 667 2010-12-23 12:06:00Z suehring/gryschka
[667]98! Moved determination of target_id's from init_coupling
[669]99! Determination of parameters needed for coupling (coupling_topology, ngp_a,
100! ngp_o) with different grid/processor-topology in ocean and atmosphere
[667]101! Adaption of ngp_xy, ngp_y to a dynamic number of ghost points.
102! The maximum_grid_level changed from 1 to 0. 0 is the normal grid, 1 to
103! maximum_grid_level the grids for multigrid, in which 0 and 1 are normal grids.
104! This distinction is due to reasons of data exchange and performance for the
105! normal grid and grids in poismg.
106! The definition of MPI-Vectors adapted to a dynamic number of ghost points.
107! New MPI-Vectors for data exchange between left and right boundaries added.
108! This is due to reasons of performance (10% faster).
[77]109!
[647]110! 646 2010-12-15 13:03:52Z raasch
111! lctit is now using a 2d decomposition by default
112!
[623]113! 622 2010-12-10 08:08:13Z raasch
114! optional barriers included in order to speed up collective operations
115!
[482]116! 438 2010-02-01 04:32:43Z raasch
117! 2d-decomposition is default for Cray-XT machines
[77]118!
[392]119! 274 2009-03-26 15:11:21Z heinze
120! Output of messages replaced by message handling routine.
121!
[226]122! 206 2008-10-13 14:59:11Z raasch
123! Implementation of a MPI-1 coupling: added __parallel within the __mpi2 part
124! 2d-decomposition is default on SGI-ICE systems
125!
[198]126! 197 2008-09-16 15:29:03Z raasch
127! multigrid levels are limited by subdomains if mg_switch_to_pe0_level = -1,
128! nz is used instead of nnz for calculating mg-levels
129! Collect on PE0 horizontal index bounds from all other PEs,
130! broadcast the id of the inflow PE (using the respective communicator)
131!
[139]132! 114 2007-10-10 00:03:15Z raasch
133! Allocation of wall flag arrays for multigrid solver
134!
[110]135! 108 2007-08-24 15:10:38Z letzel
136! Intercommunicator (comm_inter) and derived data type (type_xy) for
137! coupled model runs created, assign coupling_mode_remote,
138! indices nxlu and nysv are calculated (needed for non-cyclic boundary
139! conditions)
140!
[83]141! 82 2007-04-16 15:40:52Z raasch
142! Cpp-directive lcmuk changed to intel_openmp_bug, setting of host on lcmuk by
143! cpp-directive removed
144!
[77]145! 75 2007-03-22 09:54:05Z raasch
[73]146! uxrp, vynp eliminated,
[75]147! dirichlet/neumann changed to dirichlet/radiation, etc.,
148! poisfft_init is only called if fft-solver is switched on
[1]149!
[3]150! RCS Log replace by Id keyword, revision history cleaned up
151!
[1]152! Revision 1.28  2006/04/26 13:23:32  raasch
153! lcmuk does not understand the !$ comment so a cpp-directive is required
154!
155! Revision 1.1  1997/07/24 11:15:09  raasch
156! Initial revision
157!
158!
159! Description:
160! ------------
161! Determination of the virtual processor topology (if not prescribed by the
162! user) and computation of the grid point numbers and array bounds of the local
163! domains.
164!------------------------------------------------------------------------------!
165
166    USE control_parameters
[163]167    USE grid_variables
[1]168    USE indices
169    USE pegrid
170    USE statistics
171    USE transpose_indices
172
173
[667]174
[1]175    IMPLICIT NONE
176
[778]177    INTEGER ::  i, id_inflow_l, id_recycling_l, ind(5), j, k,                &
[151]178                maximum_grid_level_l, mg_switch_to_pe0_level_l, mg_levels_x, &
179                mg_levels_y, mg_levels_z, nnx_y, nnx_z, nny_x, nny_z, nnz_x, &
[1092]180                nnz_y, numproc_sqr, nxl_l, nxr_l, nyn_l, nys_l,    &
[778]181                nzb_l, nzt_l, omp_get_num_threads
[1]182
183    INTEGER, DIMENSION(:), ALLOCATABLE ::  ind_all, nxlf, nxrf, nynf, nysf
184
[667]185    INTEGER, DIMENSION(2) :: pdims_remote
186
[1092]187#if defined( __mpi2 )
[1]188    LOGICAL ::  found
[1092]189#endif
[1]190
191!
192!-- Get the number of OpenMP threads
193    !$OMP PARALLEL
[82]194#if defined( __intel_openmp_bug )
[1]195    threads_per_task = omp_get_num_threads()
196#else
197!$  threads_per_task = omp_get_num_threads()
198#endif
199    !$OMP END PARALLEL
200
201
202#if defined( __parallel )
[667]203
[1]204!
205!-- Determine the processor topology or check it, if prescribed by the user
206    IF ( npex == -1  .AND.  npey == -1 )  THEN
207
208!
209!--    Automatic determination of the topology
[1041]210       numproc_sqr = SQRT( REAL( numprocs ) )
211       pdims(1)    = MAX( numproc_sqr , 1 )
212       DO  WHILE ( MOD( numprocs , pdims(1) ) /= 0 )
213          pdims(1) = pdims(1) - 1
214       ENDDO
215       pdims(2) = numprocs / pdims(1)
[1]216
217    ELSEIF ( npex /= -1  .AND.  npey /= -1 )  THEN
218
219!
220!--    Prescribed by the user. The number of processors in the prescribed
221!--    topology must be equal to the number of PEs available to the job
222       IF ( ( npex * npey ) /= numprocs )  THEN
[274]223          WRITE( message_string, * ) 'number of PEs of the prescribed ',      & 
224                 'topology (', npex*npey,') does not match & the number of ', & 
225                 'PEs available to the job (', numprocs, ')'
[254]226          CALL message( 'init_pegrid', 'PA0221', 1, 2, 0, 6, 0 )
[1]227       ENDIF
228       pdims(1) = npex
229       pdims(2) = npey
230
231    ELSE
232!
233!--    If the processor topology is prescribed by the user, the number of
234!--    PEs must be given in both directions
[274]235       message_string = 'if the processor topology is prescribed by the ' //   &
236                   'user& both values of "npex" and "npey" must be given ' //  &
237                   'in the &NAMELIST-parameter file'
[254]238       CALL message( 'init_pegrid', 'PA0222', 1, 2, 0, 6, 0 )
[1]239
240    ENDIF
241
242!
[622]243!-- For communication speedup, set barriers in front of collective
244!-- communications by default on SGI-type systems
245    IF ( host(3:5) == 'sgi' )  collective_wait = .TRUE.
246
247!
[1]248!-- If necessary, set horizontal boundary conditions to non-cyclic
[722]249    IF ( bc_lr /= 'cyclic' )  cyclic(1) = .FALSE.
250    IF ( bc_ns /= 'cyclic' )  cyclic(2) = .FALSE.
[1]251
[807]252
[809]253#if ! defined( __check)
[1]254!
255!-- Create the virtual processor grid
256    CALL MPI_CART_CREATE( comm_palm, ndim, pdims, cyclic, reorder, &
257                          comm2d, ierr )
258    CALL MPI_COMM_RANK( comm2d, myid, ierr )
259    WRITE (myid_char,'(''_'',I4.4)')  myid
260
261    CALL MPI_CART_COORDS( comm2d, myid, ndim, pcoord, ierr )
262    CALL MPI_CART_SHIFT( comm2d, 0, 1, pleft, pright, ierr )
263    CALL MPI_CART_SHIFT( comm2d, 1, 1, psouth, pnorth, ierr )
264
265!
266!-- Determine sub-topologies for transpositions
267!-- Transposition from z to x:
268    remain_dims(1) = .TRUE.
269    remain_dims(2) = .FALSE.
270    CALL MPI_CART_SUB( comm2d, remain_dims, comm1dx, ierr )
271    CALL MPI_COMM_RANK( comm1dx, myidx, ierr )
272!
273!-- Transposition from x to y
274    remain_dims(1) = .FALSE.
275    remain_dims(2) = .TRUE.
276    CALL MPI_CART_SUB( comm2d, remain_dims, comm1dy, ierr )
277    CALL MPI_COMM_RANK( comm1dy, myidy, ierr )
278
[807]279#endif
[1]280
281!
[1003]282!-- Calculate array bounds along x-direction for every PE.
[1]283    ALLOCATE( nxlf(0:pdims(1)-1), nxrf(0:pdims(1)-1), nynf(0:pdims(2)-1), &
[1003]284              nysf(0:pdims(2)-1) )
[1]285
[1003]286    IF ( MOD( nx+1 , pdims(1) ) /= 0 )  THEN
[274]287       WRITE( message_string, * ) 'x-direction: gridpoint number (',nx+1,') ',&
288                               'is not an& integral divisor of the number of ',&
289                               'processors (', pdims(1),')'
[254]290       CALL message( 'init_pegrid', 'PA0225', 1, 2, 0, 6, 0 )
[1]291    ELSE
[1003]292       nnx  = ( nx + 1 ) / pdims(1)
[1]293       IF ( nnx*pdims(1) - ( nx + 1) > nnx )  THEN
[274]294          WRITE( message_string, * ) 'x-direction: nx does not match the ',  &
295                       'requirements given by the number of PEs &used',       &
296                       '& please use nx = ', nx - ( pdims(1) - ( nnx*pdims(1) &
297                                   - ( nx + 1 ) ) ), ' instead of nx =', nx
[254]298          CALL message( 'init_pegrid', 'PA0226', 1, 2, 0, 6, 0 )
[1]299       ENDIF
300    ENDIF   
301
302!
303!-- Left and right array bounds, number of gridpoints
304    DO  i = 0, pdims(1)-1
305       nxlf(i)   = i * nnx
306       nxrf(i)   = ( i + 1 ) * nnx - 1
307    ENDDO
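!
!-- Added illustrative note (hedged, not part of the original code): for an
!-- assumed nx = 47 and pdims(1) = 4, nnx = 48 / 4 = 12, so the PE columns get
!-- the index ranges (0:11), (12:23), (24:35) and (36:47), i.e. column i = 2
!-- receives nxlf(2) = 24 and nxrf(2) = 35.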
308
309!
310!-- Calculate array bounds in y-direction for every PE.
[1003]311    IF ( MOD( ny+1 , pdims(2) ) /= 0 )  THEN
[274]312       WRITE( message_string, * ) 'y-direction: gridpoint number (',ny+1,') ', &
313                           'is not an& integral divisor of the number of ',    &
314                           'processors (', pdims(2),')'
[254]315       CALL message( 'init_pegrid', 'PA0227', 1, 2, 0, 6, 0 )
[1]316    ELSE
[1003]317       nny  = ( ny + 1 ) / pdims(2)
[1]318       IF ( nny*pdims(2) - ( ny + 1) > nny )  THEN
[274]319          WRITE( message_string, * ) 'y-direction: ny does not match the ',  &
320                       'requirements given by the number of PEs &used ',      &
321                       '& please use ny = ', ny - ( pdims(2) - ( nny*pdims(2) &
[254]322                                     - ( ny + 1 ) ) ), ' instead of ny =', ny
323          CALL message( 'init_pegrid', 'PA0228', 1, 2, 0, 6, 0 )
[1]324       ENDIF
325    ENDIF   
326
327!
328!-- South and north array bounds
329    DO  j = 0, pdims(2)-1
330       nysf(j)   = j * nny
331       nynf(j)   = ( j + 1 ) * nny - 1
332    ENDDO
333
334!
335!-- Local array bounds of the respective PEs
[1003]336    nxl = nxlf(pcoord(1))
337    nxr = nxrf(pcoord(1))
338    nys = nysf(pcoord(2))
339    nyn = nynf(pcoord(2))
340    nzb = 0
341    nzt = nz
342    nnz = nz
[1]343
344!
[707]345!-- Set switches to define if the PE is situated at the border of the virtual
346!-- processor grid
347    IF ( nxl == 0 )   left_border_pe  = .TRUE.
348    IF ( nxr == nx )  right_border_pe = .TRUE.
349    IF ( nys == 0 )   south_border_pe = .TRUE.
350    IF ( nyn == ny )  north_border_pe = .TRUE.
351
352!
[1]353!-- Calculate array bounds and gridpoint numbers for the transposed arrays
354!-- (needed in the pressure solver)
355!-- For the transposed arrays, cyclic boundaries as well as top and bottom
356!-- boundaries are omitted, because they are obstructive to the transposition
357
358!
359!-- 1. transposition  z --> x
[1001]360!-- This transposition is not necessary in case of a 1d-decomposition along x
[1304]361    nys_x = nys
362    nyn_x = nyn
363    nny_x = nny
364    nnz_x = nz / pdims(1)
365    nzb_x = 1 + myidx * nnz_x
366    nzt_x = ( myidx + 1 ) * nnz_x
367    sendrecvcount_zx = nnx * nny * nnz_x
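!
!-- Added illustrative note (hedged, not part of the original code): assuming
!-- nz = 64 and pdims(1) = 4, each PE handles nnz_x = 16 vertical levels after
!-- the z --> x transposition; the PE with myidx = 1 gets nzb_x = 17 and
!-- nzt_x = 32, and sendrecvcount_zx becomes nnx * nny * 16.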
368
[1001]369    IF ( pdims(2) /= 1 )  THEN
[1003]370       IF ( MOD( nz , pdims(1) ) /= 0 )  THEN
[274]371          WRITE( message_string, * ) 'transposition z --> x:',                &
372                       '&nz=',nz,' is not an integral divisor of pdims(1)=',  &
373                                                                   pdims(1)
[254]374          CALL message( 'init_pegrid', 'PA0230', 1, 2, 0, 6, 0 )
[1]375       ENDIF
376    ENDIF
377
378!
379!-- 2. transposition  x --> y
[1003]380    nnz_y = nnz_x
381    nzb_y = nzb_x
382    nzt_y = nzt_x
383    IF ( MOD( nx+1 , pdims(2) ) /= 0 )  THEN
[274]384       WRITE( message_string, * ) 'transposition x --> y:',                &
385                         '&nx+1=',nx+1,' is not an integral divisor of ',&
386                         'pdims(2)=',pdims(2)
[254]387       CALL message( 'init_pegrid', 'PA0231', 1, 2, 0, 6, 0 )
[1]388    ENDIF
[1003]389    nnx_y = (nx+1) / pdims(2)
[1]390    nxl_y = myidy * nnx_y
[1003]391    nxr_y = ( myidy + 1 ) * nnx_y - 1
[1]392    sendrecvcount_xy = nnx_y * nny_x * nnz_y
393
394!
395!-- 3. transposition  y --> z  (ELSE:  x --> y  in case of 1D-decomposition
396!-- along x)
[1304]397    nnx_z = nnx_y
398    nxl_z = nxl_y
399    nxr_z = nxr_y
400    nny_z = (ny+1) / pdims(1)
401    nys_z = myidx * nny_z
402    nyn_z = ( myidx + 1 ) * nny_z - 1
403    sendrecvcount_yz = nnx_y * nny_z * nnz_y
404
[1001]405    IF ( pdims(2) /= 1 )  THEN
[1]406!
407!--    y --> z
408!--    This transposition is not necessary in case of a 1d-decomposition
409!--    along x, except when the upstream-spline method is switched on
[1003]410       IF ( MOD( ny+1 , pdims(1) ) /= 0 )  THEN
[274]411          WRITE( message_string, * ) 'transposition y --> z:',            &
412                            '& ny+1=',ny+1,' is not an integral divisor of ',&
413                            'pdims(1)=',pdims(1)
[254]414          CALL message( 'init_pegrid', 'PA0232', 1, 2, 0, 6, 0 )
[1]415       ENDIF
416
417    ELSE
418!
419!--    x --> y. This condition must be fulfilled for a 1D-decomposition along x
[1003]420       IF ( MOD( ny+1 , pdims(1) ) /= 0 )  THEN
[274]421          WRITE( message_string, * ) 'transposition x --> y:',               &
422                            '& ny+1=',ny+1,' is not an integral divisor of ',&
423                            'pdims(1)=',pdims(1)
[254]424          CALL message( 'init_pegrid', 'PA0233', 1, 2, 0, 6, 0 )
[1]425       ENDIF
426
427    ENDIF
428
429!
430!-- Indices for direct transpositions z --> y (used for calculating spectra)
431    IF ( dt_dosp /= 9999999.9 )  THEN
[1003]432       IF ( MOD( nz, pdims(2) ) /= 0 )  THEN
[274]433          WRITE( message_string, * ) 'direct transposition z --> y (needed ', &
434                    'for spectra):& nz=',nz,' is not an integral divisor of ',&
435                    'pdims(2)=',pdims(2)
[254]436          CALL message( 'init_pegrid', 'PA0234', 1, 2, 0, 6, 0 )
[1]437       ELSE
[1003]438          nxl_yd = nxl
439          nxr_yd = nxr
440          nzb_yd = 1 + myidy * ( nz / pdims(2) )
441          nzt_yd = ( myidy + 1 ) * ( nz / pdims(2) )
442          sendrecvcount_zyd = nnx * nny * ( nz / pdims(2) )
[1]443       ENDIF
444    ENDIF
445
446!
447!-- Indices for direct transpositions y --> x (they are only possible in case
448!-- of a 1d-decomposition along x)
449    IF ( pdims(2) == 1 )  THEN
[1003]450       nny_x = nny / pdims(1)
451       nys_x = myid * nny_x
452       nyn_x = ( myid + 1 ) * nny_x - 1
453       nzb_x = 1
454       nzt_x = nz
455       sendrecvcount_xy = nnx * nny_x * nz
[1]456    ENDIF
457
458!
459!-- Indices for direct transpositions x --> y (they are only possible in case
460!-- of a 1d-decomposition along y)
461    IF ( pdims(1) == 1 )  THEN
[1003]462       nnx_y = nnx / pdims(2)
463       nxl_y = myid * nnx_y
464       nxr_y = ( myid + 1 ) * nnx_y - 1
465       nzb_y = 1
466       nzt_y = nz
467       sendrecvcount_xy = nnx_y * nny * nz
[1]468    ENDIF
469
470!
471!-- The arrays for storing the array bounds are not needed anymore
472    DEALLOCATE( nxlf , nxrf , nynf , nysf )
473
[807]474
[809]475#if ! defined( __check)
[145]476!
477!-- Collect index bounds from other PEs (to be written to restart file later)
478    ALLOCATE( hor_index_bounds(4,0:numprocs-1) )
479
480    IF ( myid == 0 )  THEN
481
482       hor_index_bounds(1,0) = nxl
483       hor_index_bounds(2,0) = nxr
484       hor_index_bounds(3,0) = nys
485       hor_index_bounds(4,0) = nyn
486
487!
488!--    Receive data from all other PEs
489       DO  i = 1, numprocs-1
490          CALL MPI_RECV( ibuf, 4, MPI_INTEGER, i, MPI_ANY_TAG, comm2d, status, &
491                         ierr )
492          hor_index_bounds(:,i) = ibuf(1:4)
493       ENDDO
494
495    ELSE
496!
497!--    Send index bounds to PE0
498       ibuf(1) = nxl
499       ibuf(2) = nxr
500       ibuf(3) = nys
501       ibuf(4) = nyn
502       CALL MPI_SEND( ibuf, 4, MPI_INTEGER, 0, myid, comm2d, ierr )
503
504    ENDIF
505
[807]506#endif
507
[1]508#if defined( __print )
509!
510!-- Control output
511    IF ( myid == 0 )  THEN
512       PRINT*, '*** processor topology ***'
513       PRINT*, ' '
514       PRINT*, 'myid   pcoord    left right  south north  idx idy   nxl: nxr',&
515               &'   nys: nyn'
516       PRINT*, '------------------------------------------------------------',&
517               &'-----------'
518       WRITE (*,1000)  0, pcoord(1), pcoord(2), pleft, pright, psouth, pnorth, &
519                       myidx, myidy, nxl, nxr, nys, nyn
5201000   FORMAT (I4,2X,'(',I3,',',I3,')',3X,I4,2X,I4,3X,I4,2X,I4,2X,I3,1X,I3, &
521               2(2X,I4,':',I4))
522
523!
[108]524!--    Receive data from the other PEs
[1]525       DO  i = 1,numprocs-1
526          CALL MPI_RECV( ibuf, 12, MPI_INTEGER, i, MPI_ANY_TAG, comm2d, status, &
527                         ierr )
528          WRITE (*,1000)  i, ( ibuf(j) , j = 1,12 )
529       ENDDO
530    ELSE
531
532!
533!--    Send data to PE0
534       ibuf(1) = pcoord(1); ibuf(2) = pcoord(2); ibuf(3) = pleft
535       ibuf(4) = pright; ibuf(5) = psouth; ibuf(6) = pnorth; ibuf(7) = myidx
536       ibuf(8) = myidy; ibuf(9) = nxl; ibuf(10) = nxr; ibuf(11) = nys
537       ibuf(12) = nyn
538       CALL MPI_SEND( ibuf, 12, MPI_INTEGER, 0, myid, comm2d, ierr )       
539    ENDIF
540#endif
541
[809]542#if defined( __parallel ) && ! defined( __check)
[102]543#if defined( __mpi2 )
544!
545!-- In case of coupled runs, get the port name on PE0 of the atmosphere model
546!-- and pass it to PE0 of the ocean model
547    IF ( myid == 0 )  THEN
548
549       IF ( coupling_mode == 'atmosphere_to_ocean' )  THEN
550
551          CALL MPI_OPEN_PORT( MPI_INFO_NULL, port_name, ierr )
[108]552
[102]553          CALL MPI_PUBLISH_NAME( 'palm_coupler', MPI_INFO_NULL, port_name, &
554                                 ierr )
[108]555
556!
[104]557!--       Write a flag file for the ocean model and the other atmosphere
558!--       processes.
559!--       There seems to be a bug in MPICH2 which causes hanging processes
560!--       if execution of LOOKUP_NAME is continued too early
561!--       (i.e. before the port has been created)
562          OPEN( 90, FILE='COUPLING_PORT_OPENED', FORM='FORMATTED' )
563          WRITE ( 90, '(''TRUE'')' )
564          CLOSE ( 90 )
[102]565
566       ELSEIF ( coupling_mode == 'ocean_to_atmosphere' )  THEN
567
[104]568!
569!--       Continue only if the atmosphere model has created the port.
570!--       There seems to be a bug in MPICH2 which causes hanging processes
571!--       if execution of LOOKUP_NAME is continued too early
572!--       (i.e. before the port has been created)
573          INQUIRE( FILE='COUPLING_PORT_OPENED', EXIST=found )
574          DO WHILE ( .NOT. found )
575             INQUIRE( FILE='COUPLING_PORT_OPENED', EXIST=found )
576          ENDDO
577
[102]578          CALL MPI_LOOKUP_NAME( 'palm_coupler', MPI_INFO_NULL, port_name, ierr )
579
580       ENDIF
581
582    ENDIF
583
584!
585!-- In case of coupled runs, establish the connection between the atmosphere
586!-- and the ocean model and define the intercommunicator (comm_inter)
587    CALL MPI_BARRIER( comm2d, ierr )
588    IF ( coupling_mode == 'atmosphere_to_ocean' )  THEN
589
590       CALL MPI_COMM_ACCEPT( port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &
591                             comm_inter, ierr )
[108]592       coupling_mode_remote = 'ocean_to_atmosphere'
593
[102]594    ELSEIF ( coupling_mode == 'ocean_to_atmosphere' )  THEN
595
596       CALL MPI_COMM_CONNECT( port_name, MPI_INFO_NULL, 0, MPI_COMM_WORLD, &
597                              comm_inter, ierr )
[108]598       coupling_mode_remote = 'atmosphere_to_ocean'
599
[102]600    ENDIF
[206]601#endif
[102]602
[667]603!
[709]604!-- Determine the number of ghost point layers
605    IF ( scalar_advec == 'ws-scheme' .OR. momentum_advec == 'ws-scheme' )  THEN
[667]606       nbgp = 3
607    ELSE
608       nbgp = 1
[709]609    ENDIF 
[667]610
[102]611!
[709]612!-- Create a new MPI derived datatype for the exchange of surface (xy) data,
613!-- which is needed for coupled atmosphere-ocean runs.
614!-- First, calculate number of grid points of an xy-plane.
[667]615    ngp_xy  = ( nxr - nxl + 1 + 2 * nbgp ) * ( nyn - nys + 1 + 2 * nbgp )
[102]616    CALL MPI_TYPE_VECTOR( ngp_xy, 1, nzt-nzb+2, MPI_REAL, type_xy, ierr )
617    CALL MPI_TYPE_COMMIT( type_xy, ierr )
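!
!-- Added illustrative note (hedged, not part of the original code): the stride
!-- of nzt-nzb+2 corresponds to one complete vertical column, so the vector
!-- type above picks one value (blocklength 1) out of every column and thus
!-- describes a single horizontal xy-slice. Assuming nnx = nny = 12, nbgp = 3
!-- and nz = 64, this gives ngp_xy = 18 * 18 = 324 blocks with a stride of
!-- 66 reals.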
[667]618
[709]619    IF ( TRIM( coupling_mode ) /= 'uncoupled' )  THEN
[667]620   
621!
622!--    Pass the number of grid points of the atmosphere model to
623!--    the ocean model and vice versa
624       IF ( coupling_mode == 'atmosphere_to_ocean' )  THEN
625
626          nx_a = nx
627          ny_a = ny
628
[709]629          IF ( myid == 0 )  THEN
630
631             CALL MPI_SEND( nx_a, 1, MPI_INTEGER, numprocs, 1, comm_inter,  &
632                            ierr )
633             CALL MPI_SEND( ny_a, 1, MPI_INTEGER, numprocs, 2, comm_inter,  &
634                            ierr )
635             CALL MPI_SEND( pdims, 2, MPI_INTEGER, numprocs, 3, comm_inter, &
636                            ierr )
637             CALL MPI_RECV( nx_o, 1, MPI_INTEGER, numprocs, 4, comm_inter,  &
638                            status, ierr )
639             CALL MPI_RECV( ny_o, 1, MPI_INTEGER, numprocs, 5, comm_inter,  &
640                            status, ierr )
641             CALL MPI_RECV( pdims_remote, 2, MPI_INTEGER, numprocs, 6,      &
[667]642                            comm_inter, status, ierr )
643          ENDIF
644
[709]645          CALL MPI_BCAST( nx_o, 1, MPI_INTEGER, 0, comm2d, ierr )
646          CALL MPI_BCAST( ny_o, 1, MPI_INTEGER, 0, comm2d, ierr ) 
647          CALL MPI_BCAST( pdims_remote, 2, MPI_INTEGER, 0, comm2d, ierr )
[667]648       
649       ELSEIF ( coupling_mode == 'ocean_to_atmosphere' )  THEN
650
651          nx_o = nx
652          ny_o = ny 
653
654          IF ( myid == 0 ) THEN
[709]655
656             CALL MPI_RECV( nx_a, 1, MPI_INTEGER, 0, 1, comm_inter, status, &
657                            ierr )
658             CALL MPI_RECV( ny_a, 1, MPI_INTEGER, 0, 2, comm_inter, status, &
659                            ierr )
660             CALL MPI_RECV( pdims_remote, 2, MPI_INTEGER, 0, 3, comm_inter, &
661                            status, ierr )
662             CALL MPI_SEND( nx_o, 1, MPI_INTEGER, 0, 4, comm_inter, ierr )
663             CALL MPI_SEND( ny_o, 1, MPI_INTEGER, 0, 5, comm_inter, ierr )
664             CALL MPI_SEND( pdims, 2, MPI_INTEGER, 0, 6, comm_inter, ierr )
[667]665          ENDIF
666
667          CALL MPI_BCAST( nx_a, 1, MPI_INTEGER, 0, comm2d, ierr)
668          CALL MPI_BCAST( ny_a, 1, MPI_INTEGER, 0, comm2d, ierr) 
669          CALL MPI_BCAST( pdims_remote, 2, MPI_INTEGER, 0, comm2d, ierr) 
670
671       ENDIF
672 
[709]673       ngp_a = ( nx_a+1 + 2 * nbgp ) * ( ny_a+1 + 2 * nbgp )
674       ngp_o = ( nx_o+1 + 2 * nbgp ) * ( ny_o+1 + 2 * nbgp )
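!
!--    Added illustrative note (hedged, not part of the original code): for an
!--    assumed atmosphere grid with nx_a = ny_a = 79 and nbgp = 3, the number
!--    of horizontal grid points including ghost points is
!--    ngp_a = ( 80 + 6 ) * ( 80 + 6 ) = 7396.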
[667]675
676!
[709]677!--    Determine whether the horizontal grid and the number of PEs are the
678!--    same in ocean and atmosphere
679       IF ( nx_o == nx_a  .AND.  ny_o == ny_a  .AND.  &
[667]680            pdims(1) == pdims_remote(1) .AND. pdims(2) == pdims_remote(2) ) &
681       THEN
682          coupling_topology = 0
683       ELSE
684          coupling_topology = 1
685       ENDIF 
686
687!
688!--    Determine the target PEs for the exchange between ocean and
689!--    atmosphere (comm2d)
[709]690       IF ( coupling_topology == 0 )  THEN
691!
692!--       In case of identical topologies, every atmosphere PE has exactly one
693!--       ocean PE counterpart and vice versa
694          IF ( TRIM( coupling_mode ) == 'atmosphere_to_ocean' ) THEN
[667]695             target_id = myid + numprocs
696          ELSE
697             target_id = myid 
698          ENDIF
699
700       ELSE
701!
702!--       In case of non-equivalent topologies in ocean and atmosphere, a
703!--       target_id is needed only for PE0 in the ocean and PE0 in the
[709]704!--       atmosphere, since data exchange between ocean and atmosphere is
705!--       done only between these PEs.
706          IF ( myid == 0 )  THEN
707
708             IF ( TRIM( coupling_mode ) == 'atmosphere_to_ocean' )  THEN
[667]709                target_id = numprocs 
710             ELSE
711                target_id = 0
712             ENDIF
[709]713
[667]714          ENDIF
[709]715
[667]716       ENDIF
717
718    ENDIF
719
720
[102]721#endif
722
[1]723#else
724
725!
726!-- Array bounds when running on a single PE (i.e. on a non-parallel
727!-- machine)
[1003]728    nxl = 0
729    nxr = nx
730    nnx = nxr - nxl + 1
731    nys = 0
732    nyn = ny
733    nny = nyn - nys + 1
734    nzb = 0
735    nzt = nz
736    nnz = nz
[1]737
[145]738    ALLOCATE( hor_index_bounds(4,0:0) )
739    hor_index_bounds(1,0) = nxl
740    hor_index_bounds(2,0) = nxr
741    hor_index_bounds(3,0) = nys
742    hor_index_bounds(4,0) = nyn
743
[1]744!
745!-- Array bounds for the pressure solver (in the parallel code, these bounds
746!-- are the ones for the transposed arrays)
[1003]747    nys_x = nys
748    nyn_x = nyn
749    nzb_x = nzb + 1
750    nzt_x = nzt
[1]751
[1003]752    nxl_y = nxl
753    nxr_y = nxr
754    nzb_y = nzb + 1
755    nzt_y = nzt
[1]756
[1003]757    nxl_z = nxl
758    nxr_z = nxr
759    nys_z = nys
760    nyn_z = nyn
[1]761
762#endif
763
764!
765!-- Calculate the number of grid levels necessary for the multigrid Poisson solver
766!-- as well as the gridpoint indices on each level
767    IF ( psolver == 'multigrid' )  THEN
768
769!
770!--    First calculate number of possible grid levels for the subdomains
771       mg_levels_x = 1
772       mg_levels_y = 1
773       mg_levels_z = 1
774
775       i = nnx
776       DO WHILE ( MOD( i, 2 ) == 0  .AND.  i /= 2 )
777          i = i / 2
778          mg_levels_x = mg_levels_x + 1
779       ENDDO
780
781       j = nny
782       DO WHILE ( MOD( j, 2 ) == 0  .AND.  j /= 2 )
783          j = j / 2
784          mg_levels_y = mg_levels_y + 1
785       ENDDO
786
[181]787       k = nz    ! do not use nnz because it might be > nz due to transposition
788                 ! requirements
[1]789       DO WHILE ( MOD( k, 2 ) == 0  .AND.  k /= 2 )
790          k = k / 2
791          mg_levels_z = mg_levels_z + 1
792       ENDDO
793
794       maximum_grid_level = MIN( mg_levels_x, mg_levels_y, mg_levels_z )
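!
!--    Added illustrative note (hedged, not part of the original code):
!--    assuming subdomains with nnx = nny = 32 and nz = 64, the loops above
!--    give mg_levels_x = mg_levels_y = 5 (32, 16, 8, 4, 2) and
!--    mg_levels_z = 6 (64, 32, 16, 8, 4, 2), so that
!--    maximum_grid_level = MIN( 5, 5, 6 ) = 5.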
795
796!
797!--    Find out if the total domain allows more levels. These additional
[709]798!--    levels are identically processed on all PEs.
[197]799       IF ( numprocs > 1  .AND.  mg_switch_to_pe0_level /= -1 )  THEN
[709]800
[1]801          IF ( mg_levels_z > MIN( mg_levels_x, mg_levels_y ) )  THEN
[709]802
[1]803             mg_switch_to_pe0_level_l = maximum_grid_level
804
805             mg_levels_x = 1
806             mg_levels_y = 1
807
808             i = nx+1
809             DO WHILE ( MOD( i, 2 ) == 0  .AND.  i /= 2 )
810                i = i / 2
811                mg_levels_x = mg_levels_x + 1
812             ENDDO
813
814             j = ny+1
815             DO WHILE ( MOD( j, 2 ) == 0  .AND.  j /= 2 )
816                j = j / 2
817                mg_levels_y = mg_levels_y + 1
818             ENDDO
819
820             maximum_grid_level_l = MIN( mg_levels_x, mg_levels_y, mg_levels_z )
821
822             IF ( maximum_grid_level_l > mg_switch_to_pe0_level_l )  THEN
823                mg_switch_to_pe0_level_l = maximum_grid_level_l - &
824                                           mg_switch_to_pe0_level_l + 1
825             ELSE
826                mg_switch_to_pe0_level_l = 0
827             ENDIF
[709]828
[1]829          ELSE
830             mg_switch_to_pe0_level_l = 0
831             maximum_grid_level_l = maximum_grid_level
[709]832
[1]833          ENDIF
834
835!
836!--       Use switch level calculated above only if it is not pre-defined
837!--       by user
838          IF ( mg_switch_to_pe0_level == 0 )  THEN
839             IF ( mg_switch_to_pe0_level_l /= 0 )  THEN
840                mg_switch_to_pe0_level = mg_switch_to_pe0_level_l
841                maximum_grid_level     = maximum_grid_level_l
842             ENDIF
843
844          ELSE
845!
846!--          Check the pre-defined value and reset it to the default, if necessary
847             IF ( mg_switch_to_pe0_level < mg_switch_to_pe0_level_l  .OR.  &
848                  mg_switch_to_pe0_level >= maximum_grid_level_l )  THEN
[254]849                message_string = 'mg_switch_to_pe0_level ' // &
850                                 'out of range and reset to default (=0)'
851                CALL message( 'init_pegrid', 'PA0235', 0, 1, 0, 6, 0 )
[1]852                mg_switch_to_pe0_level = 0
853             ELSE
854!
855!--             Use the largest number of possible levels anyway and recalculate
856!--             the switch level to this largest number of possible values
857                maximum_grid_level = maximum_grid_level_l
858
859             ENDIF
[709]860
[1]861          ENDIF
862
863       ENDIF
864
[1056]865       ALLOCATE( grid_level_count(maximum_grid_level),                       &
866                 nxl_mg(0:maximum_grid_level), nxr_mg(0:maximum_grid_level), &
867                 nyn_mg(0:maximum_grid_level), nys_mg(0:maximum_grid_level), &
868                 nzt_mg(0:maximum_grid_level) )
[1]869
870       grid_level_count = 0
[1056]871!
872!--    Index zero required as dummy due to definition of arrays f2 and p2 in
873!--    recursive subroutine next_mg_level
874       nxl_mg(0) = 0; nxr_mg(0) = 0; nyn_mg(0) = 0; nys_mg(0) = 0; nzt_mg(0) = 0
[778]875
[1]876       nxl_l = nxl; nxr_l = nxr; nys_l = nys; nyn_l = nyn; nzt_l = nzt
877
878       DO  i = maximum_grid_level, 1 , -1
879
880          IF ( i == mg_switch_to_pe0_level )  THEN
[809]881#if defined( __parallel ) && ! defined( __check )
[1]882!
883!--          Save the grid size of the subdomain at the switch level, because
884!--          it is needed in poismg.
885             ind(1) = nxl_l; ind(2) = nxr_l
886             ind(3) = nys_l; ind(4) = nyn_l
887             ind(5) = nzt_l
888             ALLOCATE( ind_all(5*numprocs), mg_loc_ind(5,0:numprocs-1) )
889             CALL MPI_ALLGATHER( ind, 5, MPI_INTEGER, ind_all, 5, &
890                                 MPI_INTEGER, comm2d, ierr )
891             DO  j = 0, numprocs-1
892                DO  k = 1, 5
893                   mg_loc_ind(k,j) = ind_all(k+j*5)
894                ENDDO
895             ENDDO
896             DEALLOCATE( ind_all )
897!
[709]898!--          Calculate the grid size of the total domain
[1]899             nxr_l = ( nxr_l-nxl_l+1 ) * pdims(1) - 1
900             nxl_l = 0
901             nyn_l = ( nyn_l-nys_l+1 ) * pdims(2) - 1
902             nys_l = 0
903!
904!--          The size of this gathered array must not be larger than the
905!--          array tend, which is used in the multigrid scheme as a temporary
[778]906!--          array. Therefore, the subdomain size of a PE and the size of the
907!--          gathered grid are calculated. These values are used in the
908!--          routines pres and poismg.
909             subdomain_size = ( nxr - nxl + 2 * nbgp + 1 ) * &
910                              ( nyn - nys + 2 * nbgp + 1 ) * ( nzt - nzb + 2 )
[1]911             gathered_size  = ( nxr_l - nxl_l + 3 ) * ( nyn_l - nys_l + 3 ) * &
912                              ( nzt_l - nzb + 2 )
913
[809]914#elif ! defined ( __parallel )
[254]915             message_string = 'multigrid gather/scatter impossible ' // &
[1]916                          'in non-parallel mode'
[254]917             CALL message( 'init_pegrid', 'PA0237', 1, 2, 0, 6, 0 )
[1]918#endif
919          ENDIF
920
921          nxl_mg(i) = nxl_l
922          nxr_mg(i) = nxr_l
923          nys_mg(i) = nys_l
924          nyn_mg(i) = nyn_l
925          nzt_mg(i) = nzt_l
926
927          nxl_l = nxl_l / 2 
928          nxr_l = nxr_l / 2
929          nys_l = nys_l / 2 
930          nyn_l = nyn_l / 2 
931          nzt_l = nzt_l / 2 
[778]932
[1]933       ENDDO
934
[780]935!
936!--    Temporary problem: the calculation of maxerror in routine poismg currently
937!--    crashes if grid data are already collected on PE0 on the finest grid level.
938!--    To be solved later.
939       IF ( maximum_grid_level == mg_switch_to_pe0_level )  THEN
940          message_string = 'grid coarsening on subdomain level cannot be performed'
941          CALL message( 'poismg', 'PA0236', 1, 2, 0, 6, 0 )
942       ENDIF
943
[1]944    ELSE
945
[667]946       maximum_grid_level = 0
[1]947
948    ENDIF
949
[722]950!
951!-- Default level 0 tells exchange_horiz that all ghost planes have to be
952!-- exchanged. grid_level is adjusted in poismg, where only one ghost plane
953!-- is required.
954    grid_level = 0
[1]955
[809]956#if defined( __parallel ) && ! defined ( __check )
[1]957!
958!-- Gridpoint number for the exchange of ghost points (y-line for 2D-arrays)
[667]959    ngp_y  = nyn - nys + 1 + 2 * nbgp
[1]960
961!
[709]962!-- Define new MPI derived datatypes for the exchange of ghost points in
963!-- x- and y-direction for 2D-arrays (line)
964    CALL MPI_TYPE_VECTOR( nxr-nxl+1+2*nbgp, nbgp, ngp_y, MPI_REAL, type_x, &
965                          ierr )
[1]966    CALL MPI_TYPE_COMMIT( type_x, ierr )
[709]967    CALL MPI_TYPE_VECTOR( nxr-nxl+1+2*nbgp, nbgp, ngp_y, MPI_INTEGER, &
968                          type_x_int, ierr )
[1]969    CALL MPI_TYPE_COMMIT( type_x_int, ierr )
970
[667]971    CALL MPI_TYPE_VECTOR( nbgp, ngp_y, ngp_y, MPI_REAL, type_y, ierr )
972    CALL MPI_TYPE_COMMIT( type_y, ierr )
973    CALL MPI_TYPE_VECTOR( nbgp, ngp_y, ngp_y, MPI_INTEGER, type_y_int, ierr )
974    CALL MPI_TYPE_COMMIT( type_y_int, ierr )
975
976
[1]977!
978!-- Calculate gridpoint numbers for the exchange of ghost points along x
979!-- (yz-plane for 3D-arrays) and define MPI derived data type(s) for the
980!-- exchange of ghost points in y-direction (xz-plane).
981!-- Do these calculations for the model grid and (if necessary) also
982!-- for the coarser grid levels used in the multigrid method
[667]983    ALLOCATE ( ngp_yz(0:maximum_grid_level), type_xz(0:maximum_grid_level),&
984               type_yz(0:maximum_grid_level) )
[1]985
986    nxl_l = nxl; nxr_l = nxr; nys_l = nys; nyn_l = nyn; nzb_l = nzb; nzt_l = nzt
[709]987
[667]988!
989!-- Discern between the model grid, which needs nbgp ghost points, and the
990!-- grid levels for the multigrid scheme. In the latter case only one
991!-- ghost point is necessary.
[709]992!-- First definition of MPI-datatypes for exchange of ghost layers on normal
[667]993!-- grid. The following loop is needed for data exchange in poismg.f90.
994!
995!-- Determine number of grid points of yz-layer for exchange
996    ngp_yz(0) = (nzt - nzb + 2) * (nyn - nys + 1 + 2 * nbgp)
[709]997
[667]998!
[709]999!-- Define an MPI-datatype for the exchange of left/right boundaries.
1000!-- Although data are contiguous in physical memory (which does not
1001!-- necessarily require an MPI-derived datatype), the data exchange between
1002!-- left and right PEs using the MPI-derived type is 10% faster than without it.
[667]1003    CALL MPI_TYPE_VECTOR( nxr-nxl+1+2*nbgp, nbgp*(nzt-nzb+2), ngp_yz(0), &
[709]1004                          MPI_REAL, type_xz(0), ierr )
[667]1005    CALL MPI_TYPE_COMMIT( type_xz(0), ierr )
[1]1006
[709]1007    CALL MPI_TYPE_VECTOR( nbgp, ngp_yz(0), ngp_yz(0), MPI_REAL, type_yz(0), &
1008                          ierr ) 
[667]1009    CALL MPI_TYPE_COMMIT( type_yz(0), ierr )
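!
!-- Added illustrative note (hedged, not part of the original code): assuming
!-- nnx = nny = 12, nbgp = 3 and nz = 64, ngp_yz(0) = 66 * 18 = 1188, so that
!-- type_xz(0) consists of 18 blocks (one per i-index including ghost columns)
!-- of 3 * 66 = 198 contiguous reals each, separated by a stride of 1188 reals,
!-- while type_yz(0) describes 3 such yz-layers (3 * 1188 reals) lying
!-- contiguously in memory.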
[709]1010
[667]1011!
[709]1012!-- Definition of MPI-datatypes for multigrid method (coarser level grids)
[667]1013    IF ( psolver == 'multigrid' )  THEN
1014!   
[709]1015!--    Definition of MPI-datatypes as above, but only 1 ghost layer is used
1016       DO  i = maximum_grid_level, 1 , -1
1017
[667]1018          ngp_yz(i) = (nzt_l - nzb_l + 2) * (nyn_l - nys_l + 3)
1019
1020          CALL MPI_TYPE_VECTOR( nxr_l-nxl_l+3, nzt_l-nzb_l+2, ngp_yz(i), &
[709]1021                                MPI_REAL, type_xz(i), ierr )
[667]1022          CALL MPI_TYPE_COMMIT( type_xz(i), ierr )
[1]1023
[709]1024          CALL MPI_TYPE_VECTOR( 1, ngp_yz(i), ngp_yz(i), MPI_REAL, type_yz(i), &
1025                                ierr )
[667]1026          CALL MPI_TYPE_COMMIT( type_yz(i), ierr )
1027
1028          nxl_l = nxl_l / 2
1029          nxr_l = nxr_l / 2
1030          nys_l = nys_l / 2
1031          nyn_l = nyn_l / 2
1032          nzt_l = nzt_l / 2
[709]1033
[667]1034       ENDDO
[709]1035
1036    ENDIF
[1]1037#endif
1038
[809]1039#if defined( __parallel ) && ! defined ( __check )
[1]1040!
1041!-- Setting of flags for inflow/outflow conditions in case of non-cyclic
[106]1042!-- horizontal boundary conditions.
[1]1043    IF ( pleft == MPI_PROC_NULL )  THEN
[1159]1044       IF ( bc_lr == 'dirichlet/radiation' )  THEN
[1]1045          inflow_l  = .TRUE.
[1159]1046       ELSEIF ( bc_lr == 'radiation/dirichlet' )  THEN
[1]1047          outflow_l = .TRUE.
1048       ENDIF
1049    ENDIF
1050
1051    IF ( pright == MPI_PROC_NULL )  THEN
[1159]1052       IF ( bc_lr == 'dirichlet/radiation' )  THEN
[1]1053          outflow_r = .TRUE.
[1159]1054       ELSEIF ( bc_lr == 'radiation/dirichlet' )  THEN
[1]1055          inflow_r  = .TRUE.
1056       ENDIF
1057    ENDIF
1058
1059    IF ( psouth == MPI_PROC_NULL )  THEN
[1159]1060       IF ( bc_ns == 'dirichlet/radiation' )  THEN
[1]1061          outflow_s = .TRUE.
[1159]1062       ELSEIF ( bc_ns == 'radiation/dirichlet' )  THEN
[1]1063          inflow_s  = .TRUE.
1064       ENDIF
1065    ENDIF
1066
1067    IF ( pnorth == MPI_PROC_NULL )  THEN
[1159]1068       IF ( bc_ns == 'dirichlet/radiation' )  THEN
[1]1069          inflow_n  = .TRUE.
[1159]1070       ELSEIF ( bc_ns == 'radiation/dirichlet' )  THEN
[1]1071          outflow_n = .TRUE.
1072       ENDIF
1073    ENDIF
1074
[151]1075!
1076!-- Broadcast the id of the inflow PE
1077    IF ( inflow_l )  THEN
[163]1078       id_inflow_l = myidx
[151]1079    ELSE
1080       id_inflow_l = 0
1081    ENDIF
[622]1082    IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
[151]1083    CALL MPI_ALLREDUCE( id_inflow_l, id_inflow, 1, MPI_INTEGER, MPI_SUM, &
1084                        comm1dx, ierr )
1085
[163]1086!
1087!-- Broadcast the id of the recycling plane
1088!-- WARNING: needs to be adjusted in case of inflows other than from left side!
[1139]1089    IF ( NINT( recycling_width / dx ) >= nxl  .AND. &
1090         NINT( recycling_width / dx ) <= nxr )  THEN
[163]1091       id_recycling_l = myidx
1092    ELSE
1093       id_recycling_l = 0
1094    ENDIF
[622]1095    IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
[163]1096    CALL MPI_ALLREDUCE( id_recycling_l, id_recycling, 1, MPI_INTEGER, MPI_SUM, &
1097                        comm1dx, ierr )
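!
!-- Added illustrative note (hedged, not part of the original code): the
!-- MPI_SUM reduction yields the correct id because at most one PE along
!-- comm1dx contributes a non-zero value, e.g. with pdims(1) = 4 and the
!-- recycling plane located in the subdomain of myidx = 2, the summed
!-- contributions are 0 + 0 + 2 + 0 = 2.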
1098
[809]1099#elif ! defined ( __parallel )
[1159]1100    IF ( bc_lr == 'dirichlet/radiation' )  THEN
[1]1101       inflow_l  = .TRUE.
1102       outflow_r = .TRUE.
[1159]1103    ELSEIF ( bc_lr == 'radiation/dirichlet' )  THEN
[1]1104       outflow_l = .TRUE.
1105       inflow_r  = .TRUE.
1106    ENDIF
1107
[1159]1108    IF ( bc_ns == 'dirichlet/radiation' )  THEN
[1]1109       inflow_n  = .TRUE.
1110       outflow_s = .TRUE.
[1159]1111    ELSEIF ( bc_ns == 'radiation/dirichlet' )  THEN
[1]1112       outflow_n = .TRUE.
1113       inflow_s  = .TRUE.
1114    ENDIF
1115#endif
[807]1116
[106]1117!
[978]1118!-- At the inflow or outflow, u or v, respectively, have to be calculated for
1119!-- one more grid point.
1120    IF ( inflow_l .OR. outflow_l )  THEN
[106]1121       nxlu = nxl + 1
1122    ELSE
1123       nxlu = nxl
1124    ENDIF
[978]1125    IF ( inflow_s .OR. outflow_s )  THEN
[106]1126       nysv = nys + 1
1127    ELSE
1128       nysv = nys
1129    ENDIF
[1]1130
[114]1131!
1132!-- Allocate wall flag arrays used in the multigrid solver
1133    IF ( psolver == 'multigrid' )  THEN
1134
1135       DO  i = maximum_grid_level, 1, -1
1136
1137           SELECT CASE ( i )
1138
1139              CASE ( 1 )
1140                 ALLOCATE( wall_flags_1(nzb:nzt_mg(i)+1,         &
1141                                        nys_mg(i)-1:nyn_mg(i)+1, &
1142                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1143
1144              CASE ( 2 )
1145                 ALLOCATE( wall_flags_2(nzb:nzt_mg(i)+1,         &
1146                                        nys_mg(i)-1:nyn_mg(i)+1, &
1147                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1148
1149              CASE ( 3 )
1150                 ALLOCATE( wall_flags_3(nzb:nzt_mg(i)+1,         &
1151                                        nys_mg(i)-1:nyn_mg(i)+1, &
1152                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1153
1154              CASE ( 4 )
1155                 ALLOCATE( wall_flags_4(nzb:nzt_mg(i)+1,         &
1156                                        nys_mg(i)-1:nyn_mg(i)+1, &
1157                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1158
1159              CASE ( 5 )
1160                 ALLOCATE( wall_flags_5(nzb:nzt_mg(i)+1,         &
1161                                        nys_mg(i)-1:nyn_mg(i)+1, &
1162                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1163
1164              CASE ( 6 )
1165                 ALLOCATE( wall_flags_6(nzb:nzt_mg(i)+1,         &
1166                                        nys_mg(i)-1:nyn_mg(i)+1, &
1167                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1168
1169              CASE ( 7 )
1170                 ALLOCATE( wall_flags_7(nzb:nzt_mg(i)+1,         &
1171                                        nys_mg(i)-1:nyn_mg(i)+1, &
1172                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1173
1174              CASE ( 8 )
1175                 ALLOCATE( wall_flags_8(nzb:nzt_mg(i)+1,         &
1176                                        nys_mg(i)-1:nyn_mg(i)+1, &
1177                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1178
1179              CASE ( 9 )
1180                 ALLOCATE( wall_flags_9(nzb:nzt_mg(i)+1,         &
1181                                        nys_mg(i)-1:nyn_mg(i)+1, &
1182                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1183
1184              CASE ( 10 )
1185                 ALLOCATE( wall_flags_10(nzb:nzt_mg(i)+1,        &
1186                                        nys_mg(i)-1:nyn_mg(i)+1, &
1187                                        nxl_mg(i)-1:nxr_mg(i)+1) )
1188
1189              CASE DEFAULT
[254]1190                 message_string = 'more than 10 multigrid levels'
1191                 CALL message( 'init_pegrid', 'PA0238', 1, 2, 0, 6, 0 )
[114]1192
1193          END SELECT
1194
1195       ENDDO
1196
1197    ENDIF
1198
[759]1199!
1200!-- Calculate the number of groups into which parallel I/O is split.
1201!-- The default for files which are opened by all PEs (or where each
1202!-- PE opens its own independent file) is that all PEs do input/output
1203!-- in parallel at the same time. This might cause performance problems or even
1204!-- more severe ones, depending on the configuration of the underlying file
1205!-- system.
1206!-- First, set the default:
1207    IF ( maximum_parallel_io_streams == -1  .OR. &
1208         maximum_parallel_io_streams > numprocs )  THEN
1209       maximum_parallel_io_streams = numprocs
1210    ENDIF
1211
1212!
1213!-- Now calculate the number of io_blocks and the io_group to which the
1214!-- respective PE belongs. I/O of the groups is done in serial, but in parallel
1215!-- for all PEs belonging to the same group. A preliminary setting with myid
1216!-- based on MPI_COMM_WORLD has been done in parin.
1217    io_blocks = numprocs / maximum_parallel_io_streams
1218    io_group  = MOD( myid+1, io_blocks )
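!
!-- Added illustrative note (hedged, not part of the original code): assuming
!-- numprocs = 64 and maximum_parallel_io_streams = 16, io_blocks = 4 and
!-- io_group = MOD( myid+1, 4 ), i.e. PEs 3, 7, 11, ... form io_group 0 and,
!-- at any one time, only the 16 PEs of a single group access the file system.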
1219   
1220
[1]1221 END SUBROUTINE init_pegrid