Timestamp: Dec 10, 2010 8:08:13 AM
Author: raasch
Message:

New:
---

Optional barriers have been included in order to speed up the collective
operations MPI_ALLTOALL and MPI_ALLREDUCE. This feature is controlled by the
new initial parameter collective_wait. The default is .FALSE., but .TRUE. on
SGI-type systems. (advec_particles, advec_s_bc, buoyancy, check_for_restart,
cpu_statistics, data_output_2d, data_output_ptseries, flow_statistics,
global_min_max, inflow_turbulence, init_3d_model, init_particles, init_pegrid,
init_slope, parin, pres, poismg, set_particle_attributes, timestep,
read_var_list, user_statistics, write_compressed, write_var_list)
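
The change follows a single pattern in all of the listed routines: a guarded
MPI_BARRIER is placed immediately before each collective call. The fragment
below is only a minimal sketch of that pattern, using the names
collective_wait, comm2d and ierr from this changeset; the buffer names
local_sum, global_sum and the count n are placeholders, not identifiers from
the PALM source.

    #if defined( __parallel )
       !-- Synchronize all ranks first, so that time spent waiting for slower
       !-- ranks is not accumulated inside the collective operation itself;
       !-- on some systems (e.g. SGI) this speeds up the collective call.
       IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
       CALL MPI_ALLREDUCE( local_sum, global_sum, n, MPI_REAL, MPI_SUM, &
                           comm2d, ierr )
    #endif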

Adjustments for Kyushu Univ. (lcrte, ibmku). For hybrid (MPI/OpenMP) runs, the
number of OpenMP threads per MPI task can now be given as an argument to the
mrun option -O. (mbuild, mrun, subjob)

Changed:


Initialization of the module command changed for SGI-ICE/lcsgi (mbuild, subjob)

Errors:


File: 1 edited

  • palm/trunk/SOURCE/init_3d_model.f90

--- palm/trunk/SOURCE/init_3d_model.f90 (r561)
+++ palm/trunk/SOURCE/init_3d_model.f90 (r622)
@@ -7,5 +7,5 @@
 ! Current revisions:
 ! -----------------
-!
+! optional barriers included in order to speed up collective operations
 !
 ! Former revisions:
@@ -860,6 +860,8 @@
 
 #if defined( __parallel )
+          IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
           CALL MPI_ALLREDUCE( volume_flow_initial_l(1), volume_flow_initial(1),&
                               2, MPI_REAL, MPI_SUM, comm2d, ierr )
+          IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
           CALL MPI_ALLREDUCE( volume_flow_area_l(1), volume_flow_area(1),      &
                               2, MPI_REAL, MPI_SUM, comm2d, ierr )
@@ -1172,6 +1174,8 @@
 
 #if defined( __parallel )
+          IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
           CALL MPI_ALLREDUCE( volume_flow_initial_l(1), volume_flow_initial(1),&
                               2, MPI_REAL, MPI_SUM, comm2d, ierr )
+          IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
           CALL MPI_ALLREDUCE( volume_flow_area_l(1), volume_flow_area(1),      &
                               2, MPI_REAL, MPI_SUM, comm2d, ierr )
@@ -1560,10 +1564,14 @@
     sr = statistic_regions + 1
 #if defined( __parallel )
+    IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
     CALL MPI_ALLREDUCE( ngp_2dh_l(0), ngp_2dh(0), sr, MPI_INTEGER, MPI_SUM,   &
                         comm2d, ierr )
+    IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
     CALL MPI_ALLREDUCE( ngp_2dh_outer_l(0,0), ngp_2dh_outer(0,0), (nz+2)*sr,  &
                         MPI_INTEGER, MPI_SUM, comm2d, ierr )
+    IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
     CALL MPI_ALLREDUCE( ngp_2dh_s_inner_l(0,0), ngp_2dh_s_inner(0,0),         &
                         (nz+2)*sr, MPI_INTEGER, MPI_SUM, comm2d, ierr )
+    IF ( collective_wait )  CALL MPI_BARRIER( comm2d, ierr )
     CALL MPI_ALLREDUCE( ngp_3d_inner_l(0), ngp_3d_inner_tmp(0), sr, MPI_REAL, &
                         MPI_SUM, comm2d, ierr )